
Bring up Genius

45 Viliam 08 June 2017 05:44PM

(This is a "Pareto translation" of Bring up Genius by László Polgár, the book recently mentioned at Slate Star Codex. I hope that selected 20% of the book text, translated approximately, could still convey 80% of its value, while taking an order of magnitude less time and work than a full and precise translation. The original book is written in an interview form, with questions and answers; to keep it short, I am rewriting it as a monologue. I am also taking liberty of making many other changes in style, and skipping entire parts, because I am optimizing for my time. Instead of the Hungarian original, I am using an Esperanto translation Eduku geniulon as my source, because that is the language I am more fluent in.)

Introduction

Genius = work + luck

This is my book written in 1989 about 15 years of pedagogic experiment with my daughters. It is neither a recipe, nor a challenge, just a demonstration that it is possible to bring up a genius intentionally.

The so-called miracle children are natural phenomena, created by their parents and society. Sadly, many potential geniuses disappear without anyone noticing the opportunity, including themselves.

Many people in history did a similar thing by accident; we only repeated it on purpose.

1. Secrets of the pedagogic experiment

1.1. The Polgár family

The Polgár sisters (Susan, Sofia, Judit) are internationally as famous as Rubik Ernő, the inventor of the Rubik's Cube.

Are they merely their father's puppets, manipulated like chess figures? Hardly. This level of success requires agency and active cooperation. Puppets don't become geniuses. Contrariwise, I provided them opportunity, freedom, and support. They made most of the decisions.

You know what really creates puppets? The traditional school system. Watch how kids, eagerly entering school in September, mostly become burned out by Christmas.

Not all geniuses are happy. Some are rejected by their environment, or they fail to achieve their goals. But some geniuses are happy, accepted by their environment, succeed, and contribute positively to the society. I think geniuses have a greater chance to be happy in life, and luckily my daughters are an example of that.

I was a member of the Communist Party for over ten years, but I disagreed with many things; specifically the lack of democracy, and the opposition to elite education.

I have worked about 15 hours a day since I was a teenager. I am obsessed with high quality. Some people say I am stubborn, even aggressive. I try hard to achieve my goals, and I have experienced a lot of frustration; it seems to me some people were trying to destroy us. We were threatened by high-ranking politicians. We were not allowed to travel abroad until 1985, when Susan was already #1 in the international ranking of female chess players.

But I am happy that I have a great family, happy marriage, three successful children, and my creative work has an ongoing impact.

1.2 Nature or nurture?

I believe that any biologically healthy child can be brought up to be a genius. My wife and I have read tons of books and studies. Researching the childhoods of many famous people showed that they all specialized early, and each of them had a strongly supportive parent or teacher or trainer. We concluded: geniuses are not born; they are made. We proved that experimentally. We hope that someone will build a coherent pedagogical system based on our hypothesis.

Most of what we know about genetics [as of 1989] is about diseases. Healthy brains are flexible. Education was considered important by Watson and Adler. But Watson never actually received the "dozen healthy infants" to bring up, so I was the first one to do this experiment. These are my five principles:

* Human personality is an outcome of the following three: the gifts of nature, the support of the environment, and the work of one's own. Their relative importance depends on age: biology is strongest with the newborn, society with the ten-year-old, and later the importance of one's own actions grows.

* There are two aspects of social influence: the family, and the culture. Humans are naturally social, so education should treat the child as a co-author of themselves.

* I believe that any healthy child has sufficient general ability, and can specialize in any type of activity. Here I differ from the opinion of many teachers and parents who believe that the role of education is to find a hidden talent in the child. I believe that the child has a general ability, and achieves special skills by education.

* The development of the genius needs to be intentionally organized; it will not happen at random.

* People should strive for maximum possible self-realization; that brings happiness both to them and to the people around them. Pedagogy should not aim for average, but for excellence.

2. A different education

2.1. About contemporary schools

We homeschooled our children. Today's schools set a very low bar, and are intolerant towards people who differ from the average in their talent or otherwise. They don't prepare for real life; don't make kids love learning; don't inspire greater goals; bring up neither autonomous individuals nor collectives.

Which is an unsurprising outcome, if you only have one type of school, each school containing a few exceptional kids among many average ones and a few feeble ones. Even the average ones are closer to the feeble ones than to the exceptional ones. And the teacher, by necessity, adapts to the majority. There is not enough space for an individual approach, but there is a lot of mindless repetition. Sure, people talk a lot about teaching problem-solving skills, but that never happens. Both the teachers and the students suffer at school.

The gifted children are bored, and even tired, because boredom is more tiring than appropriate effort. The gifted children are disliked, just like everyone who differs from the norm. Many gifted children acquire psychosomatic problems, such as insomnia, headaches, stomach pain, neuroses. Famous people often had trouble at school; they were considered stupid and untalented. There is bullying, and a general lack of kindness. There are schools for gifted children in the USA and USSR, but somehow not in Hungary [as of 1989].

I had to fight a lot to have my first daughter home-schooled. I was afraid school would endanger the development of her abilities. We had the support of many people, including pedagogues, but various bureaucrats repeatedly rejected us, sometimes with threats. Finally we received an exceptional permission from the government, but it only applied to one child. So with the second daughter we had to go through the same process again.

2.2. Each child is a promise

It is crucial to awaken and keep the child's interest, convince them that success is achievable, trust them, and praise them. When children like the work, they will work fruitfully for long periods of time. A profound interest develops personality and skills. A motivated child will achieve more, and get tired less.

I believe in positive motivation. Create a situation where many successes are possible. Successes make children confident; failures make them insecure. Experience of success and admiration by others motivates and accelerates learning. Failure, fear, and shyness decrease the desire to achieve. Successes in one field even increase confidence in other fields.

Too much praise can cause overconfidence, but it is generally safer to err on the side of praising more rather than less. However, the praise must be connected to a real outcome.

Discipline, especially internal psychological, also increases skills.

I believe the age between 3 and 6 years is very important, and very underestimated. No, those children are not too young to learn. Actually, that's when their brains are developing the most. They should learn foreign languages. In multilingual environments children do that naturally.

Play is important for children, but play is not the opposite of work. Gathering information and solving problems is fun. Provide meaningful activities, instead of compartmentalized games. A game without learning is merely a surrogate activity. Gifted children prefer games that require mental activity. There is a continuum between learning and playing (just like between work and hobby for adults). Brains, just like muscles, become stronger with everyday activity.

My daughters used intense methods to learn languages; and chess; and table tennis. Is there a risk of damaging their personality by doing so? Maybe, but I believe the risks of damaging the personality by spending six childhood years without any effort are actually greater.

When my daughters were 15, 9, 8 years old, we participated in a 24-hour chess tournament, where you had to play 100 games in 24 hours. (Most participants were between age 25 and 30.) Susan won. The success rates during the second half of the tournament were similar to those during the first half of the tournament, for all three girls, which shows that children are capable of staying focused for long periods of time. But this was an exceptional load.

2.3. Genius - a gift or a curse?

I am not saying that we should bring up each child as a genius; only that bringing up children as geniuses is possible. I oppose uniform education, even a hypothetical one that would use my methods.

The public image of geniuses usually falls into one of two extremes. Either they are all supposed to be weird and half-insane; or they are all supposed to be CEOs and movie stars. Psychology has already moved beyond this. They examined Einstein's brain, but found no difference in weight or volume compared with an average person. For me, a genius is an average person who has achieved their full potential. Many famous geniuses attribute their success to hard work, discipline, attention, love of work, patience, time.

All healthy newborns are potential geniuses, but whether they become actual geniuses depends on their environment, education, and their own effort. For example, in the 20th century more people became geniuses than in the 19th or 18th century, inter alia because of social changes. Geniuses need to be liberated. Hopefully in the future, more people will be free and fully developed, so being a genius will become the norm, not the exception. But for now, there are only a few people like that. As people grow up, they lose the potential to become geniuses. I estimate that an average person's chance to become a genius is about 80% at age 1; 60% at age 3; 50% at age 6; 40% at age 12; 30% at age 16; 20% at age 18; only 5% at age 20. Afterwards it drops to a fraction of a percent.

A genius child can surpass their peers by 5 or 7 years. And if a "miracle child" doesn't become a "miracle adult", I am convinced that their environment did not allow them to. People say some children are faster and some are slower; I say they don't grow up in the same conditions. Good conditions allow one to progress faster. But some philosophers or writers became geniuses in old age.

People find it difficult to accept those who differ from the average. Even some scientists do; for example, Einstein's theory of relativity was opposed by many. My daughters are attacked not just by public opinion, but also by fellow chess players.

Some geniuses are unhappy about their situation. But many enjoy the creativity, perceived beauty, and success. Geniuses can harm themselves by having unrealistic expectations of their goals. But most of the harm comes from outside: dismissal of their work, lack of material and moral support, or baseless criticism. Nowadays, one demagogue can use the mass communication media to poison the whole population with rage against the representatives of national culture.

As international communication and the exchange of ideas grow, geniuses become more important than ever before. Education is necessary to overcome economic problems; new inventions create new jobs. But a genius provokes the anger of people, not by his behavior, but by his skills.

2.4. Should every child become a celebrity?

I believe in diversity in education. I am not criticizing teachers for not doing things my way. There are many other attempts to improve education. But I think it is now possible to aim even higher, to bring up geniuses. I can imagine the following environments where this could be done:

* Homeschooling, i.e. teaching your biological or adopted children. Multiple families could cooperate and share their skills.

* Specialized educational facility for geniuses; a college or a family-type institution.

Homeschooling, or private education with parental oversight, are the ancient methods for bringing up geniuses. Families should get more involved in education; you can't simply outsource everything to a school. We should support families willing to take an active role. Education works better in a loving environment.

Instead of trying to find a talent, develop one. Start specializing early, at the age of 3 or 4. One cannot become an expert on everything.

My daughters played chess 5 or 6 hours a day since the age of 4 or 5. Similarly, if you want to become a musician, spend 5 or 6 hours a day doing music; if a physicist, do physics; if a linguist, do languages. With such intense instruction, the child will soon feel the knowledge, experience success, and soon become able to use this knowledge independently. For example, after learning Esperanto 5 or 6 hours a day for a few months, the child can start corresponding with children from other countries, participate in international meet-ups, and experience conversations in a foreign language. That is at the same time pleasant, useful for the child, and useful for society. The next year, start with English, then German, etc. Now the child enjoys this, because it obviously makes sense. (Unlike at school, where most learning feels purposeless.) In chess, the first year makes you an average player, three years a great player, six years a master, fifteen years a grandmaster. When a 10-year-old child surpasses an average adult at some skill, it is highly motivating.

Gifted children need financial support, to cover the costs of books, education, and travel.

Some people express concern that early specialization may lead to ignorance of everything else. But it's the other way round; abilities formed in one area can transfer to other areas. One learns how to learn.

Also, the specialization is relative. If you want to become e.g. a computer programmer, you will learn maths, informatics, foreign languages; when you become famous, you will travel, meet interesting people, experience different cultures. My daughters, in addition to being chess geniuses, speak many foreign languages, travel, do sports, write books, etc. Having deep knowledge about something doesn't imply ignorance about everything else. On the other hand, a misguided attempt to become a universalist can result in knowing nothing, in mere pretend-knowledge of everything.

Emotional and moral education must go together with the early specialization, to develop a complex personality. We wanted our children to be enthusiastic, courageous, persistent, to be objective judges of things and people, to resist failure and avoid the temptations of success, to handle frustration and tolerate criticism even when it is wrong, to make plans, to manage their emotions. Also, to love and respect people, and to prefer creative work to physical pleasure or status symbols. We told them that they can achieve greatness, but that there can be only one world champion, so their goal should rather be to become good chess players, be good at sport, and be honest people.

Pedagogy puts great emphasis on being with children of the same age. I think that mental peers are more important than age peers. It would harm a gifted child to be forced to spend most of their time exclusively among children of the same age. On the other hand, spending most of the time with adults brings the risk that the child will learn to rely on them all the time, losing independence and initiative. You need to find a balance. I believe the best company would be of similar intellectual level, similar hobbies, and good relations.

For example, if Susan at 13 years old had been forced to play chess exclusively with other 13-year-olds, it would have harmed both sides. She could not learn anything from them; they would resent losing constantly.

Originally, I hoped I could bring up each daughter as a genius in a different field (e.g. mathematics, chess, music). It would have been more convincing evidence that you can bring up a genius of any kind. And I believe I would have succeeded, but I was constrained by money and time. We would have needed three private teachers, would have had to go each day to three different places, and would have had to buy books for maths and chess and music (and the musical instruments). By making them one team, things became easier, and the family has more things in common. Some psychologists worried that the children could be jealous of each other, and hate each other. But we brought them up properly, and this did not happen.

This is how I imagine a typical day at a school for geniuses:

* 4 hours studying the subject of specialization, e.g. chess;

* 1 hour studying a foreign language; Esperanto in the first year, English in the second, later chosen freely; during the first three months this would increase to 3 hours a day (by temporarily reducing the subject of specialization); traveling abroad during the summer;

* 1 hour computer science;

* 1 hour ethics, psychology, pedagogy, social skills;

* 1 hour physical education, specific form chosen individually.

Would I like to teach at such school? In theory yes, but in practice I am already burned out from the endless debates with authorities, the press, opinionated pedagogues and psychologists. I am really tired of that. The teachers in such school need to be protected from all this, so they can fully focus on their work.

2.5. Esperanto: the first step in learning foreign languages

Our whole family speaks Esperanto. It is part of our moral system, a tool for the equality of people. There are many prejudices against it, but the same was true of all progressive ideas. Some people argue from the Bible that multiple languages are God's punishment which we have to endure. Some people invested many resources into learning 2 or 3 or 4 foreign languages, and don't want to lose the position they gained. Economically strong nations enforce their own languages as part of dominance, and the speakers of other languages are discriminated against. Using Esperanto as everyone's second language would make international communication easier and more egalitarian. But considering today's economic pressures, it makes sense to learn English or Russian or Chinese next.

Esperanto has a regular grammar with simple syntax. It also uses many Latin, Germanic, and Slavic roots, so as a European, even if you are not familiar with the language, you will probably recognize many words in a text. This is an advantage from a pedagogical point of view: you can more easily learn its vocabulary and its grammar; you can learn the whole language about 10 times more easily than other languages.

It makes a great example of the concept of a foreign language, which pays off when learning other languages later. It is known that having learned one foreign language makes learning another foreign language easier. So, if learning Esperanto takes 10 times less time than learning another language such as English, and already knowing one foreign language makes learning the next one at least 10% more efficient, then it makes sense to learn Esperanto first. Also, Esperanto would be a great first experience for students who have difficulty learning languages; they would achieve success faster.
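To make the arithmetic behind that argument concrete, here is a minimal sketch with invented numbers (the 1000-hour figure for English and the exact size of the transfer bonus are my own illustrative assumptions, not figures from the book):

```python
# Break-even check for the "learn Esperanto first" argument.
# All numbers are illustrative assumptions, not data from the book.

hours_english_directly = 1000.0                  # assumed effort to learn English from scratch
hours_esperanto = hours_english_directly / 10    # "about 10 times easier"
transfer_bonus = 0.10                            # knowing one foreign language speeds up the next by 10%

hours_english_after_esperanto = hours_english_directly * (1 - transfer_bonus)
total_esperanto_first = hours_esperanto + hours_english_after_esperanto

print(total_esperanto_first)    # 100 + 900 = 1000 hours: exactly break-even
print(hours_english_directly)   # 1000 hours for the direct route

# With these assumptions the two routes cost the same; any transfer bonus
# above 10%, or any value from using Esperanto itself, tips the balance
# toward learning Esperanto first.
```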

3. Chess

3.1. Why chess?

Originally, we were deciding between mathematics, chess, and foreign languages. Finally we chose chess, because the results in that area are easy to measure, using a traditional and objective system, which makes it easier to prove whether the experiment succeeded or failed. This was a lucky choice in hindsight, because back then we had no idea how many obstacles we would have to face. If we hadn't been able to prove our results unambiguously, the attacks against us would have been much stronger.

Chess seemed sufficiently complex (it is a game, a science, an art, and a sport at the same time), so the risks of overspecialization were smaller; even if the children later decided they were tired of chess, they would keep some transferable skills. And the fact that our children were girls was a bonus: we were able to also prove that girls can be as intellectually able as boys; but for this purpose we needed an indisputable proof. (Although, people try to discount this proof anyway, saying things like: "Well, chess is simple, but try doing the same in languages, mathematics, or music!")

The scientific aspect of chess is that you have to follow the rules, analyze the situation, and apply your intuition. If you have a favorite hypothesis, for example a favorite opening, but you keep losing, you have to change your mind. There is an aesthetic dimension in chess; some games are published and enjoyed not just because of their impressive logic, but because they are beautiful in some sense, they do something unexpected. And most people are not familiar with this: chess requires great physical health. All the best chess players do some sport, and it is not a coincidence. Chess is also organized similarly to sports: it has tournaments, players, spectators; you have to deal with the pain of losing, you have to play fair, etc.

3.2. How did the Polgár sisters start learning chess?

I don't have a "one weird trick" to teach children chess; it's just my general pedagogical approach, applied to chess. Teach chess with love, playfully. Don't push it too forcefully. Remember to let the child win most of the time. Explain to the child that things can be learned, and that this also applies to chess. Don't worry if the child keeps jumping around during the game; they could still be thinking about the game. Don't explain everything; provide the child an opportunity to discover some things independently. Don't criticize failure; praise success.

Start with shorter lessons, only 30 minutes and then have a break. Start by solving simple problems. Our girls loved the "checkmate in two/three moves" puzzles. Let the child play against equally skilled opponents often. For a child, it is better to play many quick games (e.g. with 5-minute timers), than a few long ones. Participate in tournaments appropriate for the child's current skill.

We have a large library of different games. They are indexed by strategy, and by the names of players. So the girls can research their opponents' play before a tournament.

When a child loses a tournament, don't criticize them; the child is already sad. Offer support; help them analyze the mistakes.

When my girls write articles about chess, it makes them think deeply about the issue.

All three parts of the game (opening, middle game, ending) require the same amount of focus. Some people focus too much on the endings, and neglect the rest. But at a tournament, a bad opening can ruin the whole game.

Susan had the most difficult situation of the three daughters. In hindsight, having her learn 7 or 8 foreign languages was probably too much; some of that time would have been better spent further improving her chess skills. As the oldest one, she also faced the worst criticism from haters; as a consequence she became the most defensive player of the three. The two younger sisters had the advantage that they could oppose the same pressures together. But still, I am sure that without those pressures, they too could have progressed even faster.

Politicians influenced the decisions of the Hungarian Chess Association; as a result my daughters were often forbidden from participating in international youth competitions, despite being the best national players. They wanted to prevent Susan from becoming the worldwide #1 female chess player. Once they even "donated" 100 points to her competitor, to keep Susan in 2nd place. Later they didn't allow her to participate in the international male tournaments, although her results in the Hungarian male tournaments qualified her for that. The government regularly refused to issue passports to us, claiming that "our foreign travels hurt the public order". Also, it was difficult to find a trainer for my daughters, despite them being at the top of world rankings. Only recently did we receive foreign help; a patron from the Netherlands offered to pay trainers and sparring partners for my daughters, and also bought Susan a personal computer. A German journalist gave us a program and a database, and taught the children how to use them.

The Hungarian press kept attacking us and published falsehoods. We filed a few lawsuits, and won them all, but it just distracted us from our work. The foreign press, whether writing from the chess, psychological, or pedagogical perspective, was fair to us; they wrote almost 40,000 articles about us, so finally even the Hungarian chess players, psychologists and pedagogues could learn about us from them.

At the beginning, I was a father, a trainer, and a manager to my daughters. But I am completely underqualified to be their trainer these days, so I just manage their trainers.

Until recently no one believed women could play chess on a level comparable with men. Now the three girls together hold about 40 Guinness records; they have repeatedly outperformed their own former records. In a 1988 interview Karpov said: "Susan is extraordinarily strong, but Judit... at such an age, neither I nor Kasparov could play like Judit plays."

3.3. How can we make our children like chess?

Some tips for teaching chess to 4- or 5-year-old children. First, I made a blank board divided into 8x8 little squares, with named rows and columns. I named a square, and my daughter had to find it; then she named a square and I had to find it. Then we used the black-and-white version, and we guessed the color of the named square without looking.
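The color-guessing drill has a simple rule behind it: a square's color is determined by the parity of its file and rank. A minimal sketch of that rule (the function name and examples are my own illustration, not from the book):

```python
# Color of a chess square from its algebraic name, e.g. "e4".
# On a standard board a1 is dark, and colors alternate along every row and column,
# so a square is light exactly when file index + rank index is odd.

def square_color(square: str) -> str:
    file_index = ord(square[0].lower()) - ord('a') + 1   # a=1 ... h=8
    rank_index = int(square[1])                          # 1 ... 8
    return "light" if (file_index + rank_index) % 2 == 1 else "dark"

assert square_color("a1") == "dark"
assert square_color("h1") == "light"
assert square_color("e4") == "light"   # e=5, 5+4=9, odd, so light
```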

Then we introduced kings, in a "king vs king" combat; the task was to reach the opposing row of the board with your king. Then we added a pawn; the goal remained to reach the opposing row. After a month of playing, we introduced the queen, and the concept of checkmate. Later we gradually added the remaining pieces (knights were the most difficult).

Then we solved about a thousand "checkmate in one move" puzzles. Then two moves, three moves, four moves. That took another 3 or 4 months. And only afterwards did we start really playing against each other.

To provide an advantage for the child, don't play with fewer pieces, because that changes the structure of the game. Instead, give yourself a very short time limit, or deliberately make a mistake, so the child can learn to notice them.

Have patience if some phase takes a lot of time. The stronger the fundamentals, the better you can build on them later. This is where I think our educational system makes great mistakes. Schools don't teach intensely, so children keep forgetting most of what they learned during the long gaps between lessons. And then, despite not having fully mastered the first step, they move to the second one, etc.

3.4. Chess and psychology

Competitive chess helps develop personality: will, emotion, perseverance, self-discipline, focus, self-control. It develops intellectual skills: memory, combination skills, logic, proper use of intuition. Understanding your opponent's weakness will help you.

People overestimate how much IQ tests determine talent. Measurements of people talented in different areas show that their average IQ is only a bit above the average of the population.

3.5. Emancipation of women

Some people say, incorrectly, that my daughter won the male chess championship. But there is officially no such thing as "male chess championship", there is simply chess championship, open to both men and women. (And then, there is a separate female chess championship, only for women, but that is considered second league.)

I prepared the plan for my children before they were born. I didn't know I would have all girls, so I did not expect this special problem: the discrimination against women. I wanted to bring up my daughter Susan exactly according to the plan, but many people tried to prevent it; they insisted that she cannot compete with boys, that she should only compete with girls. Thus my original goal of proving that you can bring up a genius indirectly became a goal of proving that there are no essential intellectual differences between men and women, and therefore one can't use that argument as an excuse for the subjugation of women.

People kept telling me that I could only bring up Susan to be a female champion, not to compete with men. But I knew that during elementary school, girls can compete with boys. Only later, when they start playing the female role (when they are taught to clean the house, wash laundry, cook, follow fashion, pay attention to the details of clothing, and try to get married as soon as possible), when they are expected to do different things than boys are, does this negatively impact the development of their skills. But family duties and bringing up children can be done by both parents together.

Women can achieve the same results, if they get similar conditions. I tried to provide that for my daughters, but I couldn't convince the whole society to treat them the same.

We know about differences between adult men and women, but we don't know whether they were caused by biology or education. And we know that e.g. in mathematics and languages, during elementary and high school girls progress at the same pace as boys, and only later do the differences appear. This is evidence in favor of equality. We do not know what children growing up without discrimination would be like.

On the other hand, the current system also provides some advantages for women; for example the female chess players don't need to work that hard to become the (female) elite, and some of them don't want to give that up. Such women are among the greatest opponents of my daughters.

4. The meaning of this whole affair

4.1. Family value

I am certain that without a good family background the success of my daughters would not have been possible. It is important, before people marry, to have a clear idea of what they expect from their marriage. When partners cooperate, the mutual help, the shared experiences, the education of children, good habits, etc. can deepen their love. Children need a family without conflicts to feel safe. But of course, if the situation becomes too bad, divorce might become the way to reduce conflicts.

To bring up a genius, it is desirable for one parent to stay at home and take care of children. But it can be the father, too.

[Klára Polgár says:] When I met László, my first impression was that he was an interesting person full of ideas, but one should not believe even half of them.

When Susan was three and a half, László said it was time for her to specialize. She was good at math; at the age of four she had already learned the material of the first four grades. Once she found chess figures in a box, and started playing with them as toys. László was spending a lot of time with her, and one day I was surprised to see them playing chess. László loved chess, but I never learned it.

So, we could have chosen math or foreign languages, but we felt that Susan was really happy playing chess, and she started being good at it. But our parents and neighbors shook their heads: "Chess? For a girl?" People told me: "What kind of a mother are you? Why do you allow your husband to play chess with Susan?" I had my doubts, but now I believe I made the right choice.

People are concerned whether my children had a real childhood. I think they are at least as happy as their peers, probably more.

I always wanted to have a good, peaceful family life, and I believe I have achieved that. [End of Klára's part.]

4.2. Being a minority

It is generally known that Jewish people have achieved many excellent results in intellectual fields. Some ask whether the cause of this is biological or social. I believe it is social.

First, Jewish families are usually traditional, stable, and care a lot about education. They knew that they would be discriminated against, and would have to work twice as hard, and that at any moment they might be forced to leave their home, or even their country, so their knowledge might be the only thing they would always be able to keep. The Jewish religion requires parents to educate their children from early childhood; the Talmud requires parents to become the child's first teachers.

4.3. Witnesses of the genius education: the happy children

I care about the happiness of my children. But not only do I want to make them happy, I also want to develop their ability to be happy. And I think that being a genius is the most certain way. The life of a genius may be difficult, but happy anyway. On the other hand, average people, despite seemingly playing it safe, often become alcoholics, drug addicts, neurotics, loners, etc.

Some geniuses become unhappy with their profession. But even then I believe it is easier for a genius to change professions.

Happiness = work + love + freedom + luck

People worry that child geniuses lose their childhood. But the average childhood is actually not as great as people describe it; many people do not have a happy childhood. Parents want to make their children happy, but they often do it wrong: they buy them expensive toys, but they don't prepare them for life; they outsource that responsibility to school, which generally does not have the right conditions.

And when parents try to fully develop the capabilities of their children, instead of social support they usually get criticism. People will blame them for being overly ambitious, for pushing the children to achieve things they themselves failed at. I personally know people who tried to educate their children similarly to how we did, but the press launched a full-scale attack against them, and they gave up.

My daughters' lives are full of variety. They have met famous people: presidents, prime ministers, ambassadors, Princess Diana, millionaires, mayors, UN delegates, famous artists, other Olympic winners. They have appeared on television, radio, and in newspapers. They have traveled around the whole world and visited dozens of famous places. They have hobbies. They have friends in many parts of the world. And our house is always open to guests.

4.4. Make your life an ethical model

People reading this text may be surprised that, while they expected a rational explanation, I mention emotions and morality a lot. But those are necessary for a good life. Everyone should try to improve themselves in these respects. The reason why I did not give up, despite all the obstacles and malice, is that for me, living morally and creating good is an internal law. I couldn't do otherwise. I already know that even writing this very book will trigger more attacks, but I am doing it regardless.

And morality is also a thing we are not born with, but which needs to be taught to us, preferably in infancy. And we need to think about it, instead of expecting it to just happen. And the schools fail in this, too. I see it as an integral part of bringing up a genius.

One should aim to be a paragon; to live in a way that will make others want to follow you. Learn and work a lot; expect a lot from yourself and from others. Give love, and receive love. Live in peace with yourself and your neighbors. Work hard to be happy, and to make other people happy. Be a humanist, fight against prejudice. Protect the peace of the family, bring up your children towards perfection. Be honest. Respect freedom of yourself and of the others. Trust humanity; support the communities small and large. Etc.

(The book finishes by listing the achievements of the Polgár sisters, and with their various photos: playing chess, doing sports. I'll simply link their Wikipedia pages: Susan, Sofia, Judit. I hope you enjoyed reading this experimental translation; and if you think I omitted something important, feel free to add the missing parts in the comments. Note: I do believe that this book is generally correct and useful, but that doesn't mean I necessarily agree with every single detail. The opinions expressed here belong to the author; unless, of course, some of them got distorted by my hasty translation.)

Effective altruism is self-recommending

42 Benquo 21 April 2017 06:37PM

A parent I know reports (some details anonymized):

Recently we bought my 3-year-old daughter a "behavior chart," in which she can earn stickers for achievements like not throwing tantrums, eating fruits and vegetables, and going to sleep on time. We successfully impressed on her that a major goal each day was to earn as many stickers as possible.

This morning, though, I found her just plastering her entire behavior chart with stickers. She genuinely seemed to think I'd be proud of how many stickers she now had.

The Effective Altruism movement has now entered this extremely cute stage of cognitive development. EA is more than three years old, but institutions age differently than individuals.

What is a confidence game?

In 2009, investment manager and con artist Bernie Madoff pled guilty to running a massive fraud, with $50 billion in fake return on investment, having outright embezzled around $18 billion out of the $36 billion investors put into the fund. Only a couple of years earlier, when my grandfather was still alive, I remember him telling me about how Madoff was a genius, getting his investors a consistent high return, and about how he wished he could be in on it, but Madoff wasn't accepting additional investors.

What Madoff was running was a classic Ponzi scheme. Investors gave him money, and he told them that he'd gotten them an exceptionally high return on investment, when in fact he had not. But because he promised to be able to do it again, his investors mostly reinvested their money, and more people were excited about getting in on the deal. There was more than enough money to cover the few people who wanted to take money out of this amazing opportunity.

Ponzi schemes, pyramid schemes, and speculative bubbles are all situations in which investors' expected profits are paid out from the money paid in by new investors, instead of from any independently profitable venture. Ponzi schemes are centrally managed – the person running the scheme represents it to investors as legitimate, and takes responsibility for finding new investors and paying off old ones. In pyramid schemes such as multi-level marketing and chain letters, each generation of investors recruits new investors and profits from them. In speculative bubbles, there is no formal structure propping up the scheme, only a common, mutually reinforcing set of expectations among speculators driving up the price of something that was already for sale.

The general situation in which someone sets themself up as the repository of others' confidence, and uses this as leverage to acquire increasing investment, can be called a confidence game.

Some of the most iconic Ponzi schemes blew up quickly because they promised wildly unrealistic growth rates. This had three undesirable effects for the people running the schemes. First, it attracted too much attention – too many people wanted into the scheme too quickly, so they rapidly exhausted sources of new capital. Second, because their rates of return were implausibly high, they made themselves targets for scrutiny. Third, the extremely high rates of return themselves caused their promises to quickly outpace what they could plausibly return to even a small share of their investor victims.

Madoff was careful to avoid all these problems, which is why his scheme lasted for nearly half a century. He promised only plausibly high returns (around 10% annually) for a successful hedge fund, especially one illegally engaged in insider trading, rather than the sort of implausibly high returns typical of more blatant Ponzi schemes. (Charles Ponzi promised to double investors' money in 90 days.) Madoff showed reluctance to accept new clients, like any other fund manager who doesn't want to get too big for their trading strategy.
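A rough way to see why the promised rate matters: the scheme's promised liabilities compound at that rate, while its actual assets grow only from new deposits. Here is a minimal sketch with invented deposit figures and a crude insolvency threshold (none of these numbers come from the Madoff or Ponzi cases):

```python
# Compare how quickly promised liabilities outrun actual cash under two promised rates.
# The scheme earns no real returns; its only inflow is new deposits. Illustrative only.

def years_until_promises_dwarf_assets(promised_annual_rate, initial_deposits=1.0,
                                      new_deposits_per_year=0.5, max_years=50):
    liabilities = initial_deposits          # what investors believe they are owed
    cash = initial_deposits                 # what the scheme actually holds
    for year in range(1, max_years + 1):
        liabilities = liabilities * (1 + promised_annual_rate) + new_deposits_per_year
        cash += new_deposits_per_year
        if liabilities > 10 * cash:         # crude proxy for the point of collapse
            return year
    return None

print(years_until_promises_dwarf_assets(0.10))   # ~10% a year, Madoff-style: decades of slack
print(years_until_promises_dwarf_assets(15.0))   # doubling every 90 days is roughly 16x a year: collapses in year one
```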

He didn't plaster stickers all over his behavior chart – he put a reasonable number of stickers on it. He played a long game.

Not all confidence games are inherently bad. For instance, the US national pension system, Social Security, operates as a kind of Ponzi scheme, yet it is not obviously unsustainable, and many people continue to be glad that it exists. Nominally, when people pay Social Security taxes, the money is invested in the Social Security trust fund, which holds interest-bearing financial assets that will be used to pay out benefits in their old age. In this respect it looks like an ordinary pension fund.

However, the financial assets are US Treasury bonds. There is no independently profitable venture. The Federal Government of the United States of America is quite literally writing an IOU to itself, and then spending the money on current expenditures, including paying out current Social Security benefits.

The Federal Government, of course, can write as large an IOU to itself as it wants. It could make all tax revenues part of the Social Security program. It could issue new Treasury bonds and gift them to Social Security. None of this would increase its ability to pay out Social Security benefits. It would be an empty exercise in putting stickers on its own chart.
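A toy consolidated ledger makes the point: issuing bonds from one arm of the government to another changes the internal bookkeeping but not the net position. (The figures and names below are purely illustrative.)

```python
# Toy model: the Treasury issues bonds to the Social Security trust fund.
# Internally, the trust fund's assets grow, but the government's net position does not.

treasury_liabilities = 0.0   # what the Treasury owes the trust fund
trust_fund_assets = 0.0      # what the trust fund "holds"

def issue_bonds_to_trust_fund(amount):
    global treasury_liabilities, trust_fund_assets
    treasury_liabilities += amount
    trust_fund_assets += amount

issue_bonds_to_trust_fund(1_000_000)
net_position = trust_fund_assets - treasury_liabilities
print(trust_fund_assets)   # 1000000.0: the chart has more stickers on it
print(net_position)        # 0.0: no new capacity to pay benefits
```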

If the Federal government loses the ability to collect enough taxes to pay out social security benefits, there is no additional capacity to pay represented by US Treasury bonds. What we have is an implied promise to pay out future benefits, backed by the expectation that the government will be able to collect taxes in the future, including Social Security taxes.

There's nothing necessarily wrong with this, except that the mechanism by which Social Security is funded is obscured by financial engineering. However, this misdirection should raise at least some doubts as to the underlying sustainability or desirability of the commitment. In fact, this scheme was adopted specifically to give people the impression that they had some sort of property rights over their Social Security pension, in order to make the program politically difficult to eliminate. Once people have "bought in" to a program, they will be reluctant to treat their prior contributions as sunk costs, and will be willing to invest additional resources to salvage their investment, in ways that may make them increasingly reliant on it.

Not all confidence games are intrinsically bad, but dubious programs benefit the most from being set up as confidence games. More generally, bad programs are the ones that benefit the most from being allowed to fiddle with their own accounting. As Daniel Davies writes, in The D-Squared Digest One Minute MBA - Avoiding Projects Pursued By Morons 101:

Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. […] One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffett) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.

Our lecturer, in summing up the debate, made the not unreasonable point that if stock options really were a fantastic tool which unleashed the creative power in every employee, everyone would want to expense as many of them as possible, the better to boast about how innovative, empowered and fantastic they were. Since the tech companies' point of view appeared to be that if they were ever forced to account honestly for their option grants, they would quickly stop making them, this offered decent prima facie evidence that they weren't, really, all that fantastic.

However, I want to generalize the concept of confidence games from the domain of financial currency, to the domain of social credit more generally (of which money is a particular form that our society commonly uses), and in particular I want to talk about confidence games in the currency of credit for achievement.

If I were applying for a very important job with great responsibilities, such as President of the United States, CEO of a top corporation, or head or board member of a major AI research institution, I could be expected to have some relevant prior experience. For instance, I might have had some success managing a similar, smaller institution, or serving the same institution in a lesser capacity. More generally, when I make a bid for control over something, I am implicitly claiming that I have enough social credit – enough of a track record – that I can be expected to do good things with that control.

In general, if someone has done a lot, we should expect to see an iceberg pattern where a small easily-visible part suggests a lot of solid but harder-to-verify substance under the surface. One might be tempted to make a habit of imputing a much larger iceberg from the combination of a small floaty bit, and promises. But, a small easily-visible part with claims of a lot of harder-to-see substance is easy to mimic without actually doing the work. As Davies continues:

The Vital Importance of Audit. Emphasised over and over again. Brealey and Myers has a section on this, in which they remind callow students that like backing-up one's computer files, this is a lesson that everyone seems to have to learn the hard way. Basically, it's been shown time and again and again; companies which do not audit completed projects in order to see how accurate the original projections were, tend to get exactly the forecasts and projects that they deserve. Companies which have a culture where there are no consequences for making dishonest forecasts, get the projects they deserve. Companies which allocate blank cheques to management teams with a proven record of failure and mendacity, get what they deserve.

If you can independently put stickers on your own chart, then your chart is no longer reliably tracking something externally verified. If forecasts are not checked and tracked, or forecasters are not consequently held accountable for their forecasts, then there is no reason to believe that assessments of future, ongoing, or past programs are accurate. Adopting a wait-and-see attitude, insisting on audits for actual results (not just predictions) before investing more, will definitely slow down funding for good programs. But without it, most of your funding will go to worthless ones.

Open Philanthropy, OpenAI, and closed validation loops

The Open Philanthropy Project recently announced a $30 million grant to the $1 billion nonprofit AI research organization OpenAI. This is the largest single grant it has ever made. The main point of the grant is to buy influence over OpenAI’s future priorities; Holden Karnofsky, Executive Director of the Open Philanthropy Project, is getting a seat on OpenAI’s board as part of the deal. This marks the second major shift in focus for the Open Philanthropy Project.

The first shift (back when it was just called GiveWell) was from trying to find the best already-existing programs to fund (“passive funding”) to envisioning new programs and working with grantees to make them reality (“active funding”). The new shift is from funding specific programs at all, to trying to take control of programs without any specific plan.

To justify the passive funding stage, all you have to believe is that you can know better than other donors, among existing charities. For active funding, you have to believe that you’re smart enough to evaluate potential programs, just like a charity founder might, and pick ones that will outperform. But buying control implies that you think you’re so much better, that even before you’ve evaluated any programs, if someone’s doing something big, you ought to have a say.

When GiveWell moved from a passive to an active funding strategy, it was relying on the moral credit it had earned for its extensive and well-regarded charity evaluations. The thing that was particularly exciting about GiveWell was that they focused on outcomes and efficiency. They didn't just focus on the size or intensity of the problem a charity was addressing. They didn't just look at financial details like overhead ratios. They asked the question a consequentialist cares about: for a given expenditure of money, how much will this charity be able to improve outcomes?

However, when GiveWell tracks its impact, it does not track objective outcomes at all. It tracks inputs: attention received (in the form of visits to its website) and money moved on the basis of its recommendations. In other words, its estimate of its own impact is based on the level of trust people have placed in it.

So, as GiveWell built out the Open Philanthropy Project, its story was: We promised to do something great. As a result, we were entrusted with a fair amount of attention and money. Therefore, we should be given more responsibility. We represented our behavior as praiseworthy, and as a result people put stickers on our chart. For this reason, we should be advanced stickers against future days of praiseworthy behavior.

Then, as the Open Philanthropy Project explored active funding in more areas, its estimate of its own effectiveness grew. After all, it was funding more speculative, hard-to-measure programs, but a multi-billion-dollar donor, which was largely relying on the Open Philanthropy Project's opinions to assess efficacy (including its own efficacy), continued to trust it.

What is missing here is any objective track record of benefits. What this looks like to me, is a long sort of confidence game – or, using less morally loaded language, a venture with structural reliance on increasing amounts of leverage – in the currency of moral credit.

Version 0: GiveWell and passive funding

First, there was GiveWell. GiveWell’s purpose was to find and vet evidence-backed charities. However, it recognized that charities know their own business best. It wasn’t trying to do better than the charities; it was trying to do better than the typical charity donor, by being more discerning.

GiveWell’s thinking from this phase is exemplified by co-founder Elie Hassenfeld’s Six tips for giving like a pro:

When you give, give cash – no strings attached. You’re just a part-time donor, but the charity you’re supporting does this full-time and staff there probably know a lot more about how to do their job than you do. If you’ve found a charity that you feel is excellent – not just acceptable – then it makes sense to trust the charity to make good decisions about how to spend your money.

GiveWell similarly tried to avoid distorting charities’ behavior. Its job was only to evaluate, not to interfere. To perceive, not to act. To find the best, and buy more of the same.

How did GiveWell assess its effectiveness in this stage? When GiveWell evaluates charities, it estimates their cost-effectiveness in advance. It assesses the program the charity is running, through experimental evidence in the form of randomized controlled trials. GiveWell also audits the charity to make sure it's actually running the program, and figures out how much it costs as implemented. This is an excellent, evidence-based way to generate a prediction of how much good will be done by moving money to the charity.
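As a toy illustration of how such a prediction is composed, combining an audited cost per unit delivered with an experimentally estimated effect per unit (the numbers below are invented for the example; they are not GiveWell's figures):

```python
# Toy cost-effectiveness estimate in the style described above. All figures invented.

cost_per_net_delivered = 5.00          # from auditing what the charity actually spends per net
nets_needed_per_death_averted = 600    # from RCT-style evidence on the intervention's effect

cost_per_death_averted = cost_per_net_delivered * nets_needed_per_death_averted
print(cost_per_death_averted)          # 3000.0 dollars: a prediction, not a measured outcome
```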

As far as I can tell, these predictions are untested.

One of GiveWell’s early top charities was VillageReach, which helped Mozambique with TB immunization logistics. GiveWell estimated that VillageReach could save a life for $1,000. But this charity is no longer recommended. The public page says:

VillageReach (www.villagereach.org) was our top-rated organization for 2009, 2010 and much of 2011 and it has received over $2 million due to GiveWell's recommendation. In late 2011, we removed VillageReach from our top-rated list because we felt its project had limited room for more funding. As of November 2012, we believe that this project may have room for more funding, but we still prefer our current highest-rated charities above it.

GiveWell reanalyzed the data it based its recommendations on, but hasn’t published an after-the-fact retrospective of long-run results. I asked GiveWell about this by email. The response was that such an assessment was not prioritized because GiveWell had found implementation problems in VillageReach's scale-up work as well as reasons to doubt its original conclusion about the impact of the pilot program. It's unclear to me whether this has caused GiveWell to evaluate charities differently in the future.

I don't think someone looking at GiveWell's page on VillageReach would be likely to reach the conclusion that GiveWell now believes its original recommendation was likely erroneous. GiveWell's impact page continues to count money moved to VillageReach without any mention of the retracted recommendation. If we assume that the point of tracking money moved is to track the benefit of moving money from worse to better uses, then repudiated programs ought to be counted against the total, as costs, rather than towards it.

GiveWell has recommended the Against Malaria Foundation for the last several years as a top charity. AMF distributes long-lasting insecticide-treated bed nets to prevent mosquitos from transmitting malaria to humans. Its evaluation of AMF does not mention any direct evidence, positive or negative, about what happened to malaria rates in the areas where AMF operated. (There is a discussion of the evidence that the bed nets were in fact delivered and used.) In the supplementary information page, however, we are told:

Previously, AMF expected to collect data on malaria case rates from the regions in which it funded LLIN distributions: […] In 2016, AMF shared malaria case rate data […] but we have not prioritized analyzing it closely. AMF believes that this data is not high quality enough to reliably indicate actual trends in malaria case rates, so we do not believe that the fact that AMF collects malaria case rate data is a consideration in AMF’s favor, and do not plan to continue to track AMF's progress in collecting malaria case rate data.

The data was noisy, so they simply stopped checking whether AMF’s bed net distributions do anything about malaria.

If we want to know the size of the improvement made by GiveWell in the developing world, we have their predictions about cost-effectiveness, an audit trail verifying that work was performed, and their direct measurement of how much money people gave because they trusted GiveWell. The predictions on the final target – improved outcomes – have not been tested.

GiveWell is actually doing unusually well as far as major funders go. It sticks to describing things it's actually responsible for. By contrast, the Gates Foundation, in a report to Warren Buffett claiming to describe its impact, simply described overall improvement in the developing world, a very small rhetorical step from claiming credit for 100% of the improvement. GiveWell at least sticks to facts about GiveWell's own effects, and this is to its credit. But it focuses on costs it has been able to impose, not benefits it has been able to create.

The Centre for Effective Altruism's William MacAskill made a related point back in 2012, though he talked about the lack of any sort of formal outside validation or audit, rather than focusing on empirical validation of outcomes:

As far as I know, GiveWell haven't commissioned a thorough external evaluation of their recommendations. […] This surprises me. Whereas businesses have a natural feedback mechanism, namely profit or loss, research often doesn't, hence the need for peer-review within academia. This concern, when it comes to charity-evaluation, is even greater. If GiveWell's analysis and recommendations had major flaws, or were systematically biased in some way, it would be challenging for outsiders to work this out without a thorough independent evaluation. Fortunately, GiveWell has the resources to, for example, employ two top development economists to each do an independent review of their recommendations and the supporting research. This would make their recommendations more robust at a reasonable cost.

GiveWell's page on self-evaluation says that it discontinued external reviews in August 2013. This page links to an explanation of the decision, which concludes:

We continue to believe that it is important to ensure that our work is subjected to in-depth scrutiny. However, at this time, the scrutiny we’re naturally receiving – combined with the high costs and limited capacity for formal external evaluation – make us inclined to postpone major effort on external evaluation for the time being.

That said,

  • If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.
  • We do intend eventually to re-institute formal external evaluation.

Four years later, assessing the credibility of this assurance is left as an exercise for the reader.

Version 1: GiveWell Labs and active funding

Then there was GiveWell Labs, later called the Open Philanthropy Project. It looked into more potential philanthropic causes, where the evidence base might not be as cut-and-dried as that for the GiveWell top charities. One thing they learned was that in many areas, there simply weren’t shovel-ready programs ready for funding – a funder has to play a more active role. This shift was described by GiveWell co-founder Holden Karnofsky in his 2013 blog post, Challenges of passive funding:

By “passive funding,” I mean a dynamic in which the funder’s role is to review others’ proposals/ideas/arguments and pick which to fund, and by “active funding,” I mean a dynamic in which the funder’s role is to participate in – or lead – the development of a strategy, and find partners to “implement” it. Active funders, in other words, are participating at some level in “management” of partner organizations, whereas passive funders are merely choosing between plans that other nonprofits have already come up with.

My instinct is generally to try the most “passive” approach that’s feasible. Broadly speaking, it seems that a good partner organization will generally know their field and environment better than we do and therefore be best positioned to design strategy; in addition, I’d expect a project to go better when its implementer has fully bought into the plan as opposed to carrying out what the funder wants. However, (a) this philosophy seems to contrast heavily with how most existing major funders operate; (b) I’ve seen multiple reasons to believe the “active” approach may have more relative merits than we had originally anticipated. […]

  • In the nonprofit world of today, it seems to us that funder interests are major drivers of which ideas get proposed and fleshed out, and therefore, as a funder, it’s important to express interests rather than trying to be fully “passive.”
  • While we still wish to err on the side of being as “passive” as possible, we are recognizing the importance of clearly articulating our values/strategy, and also recognizing that an area can be underfunded even if we can’t easily find shovel-ready funding opportunities in it.

GiveWell earned some credibility from its novel, evidence-based outcome-oriented approach to charity evaluation. But this credibility was already – and still is – a sort of loan. We have GiveWell's predictions or promises of cost effectiveness in terms of outcomes, and we have figures for money moved, from which we can infer how much we were promised in improved outcomes. As far as I know, no one's gone back and checked whether those promises turned out to be true.

In the meantime, GiveWell then leveraged this credibility by extending its methods into more speculative domains, where less was checkable, and donors had to put more trust in the subjective judgment of GiveWell analysts. This was called GiveWell Labs. At the time, this sort of compounded leverage may have been sensible, but it's important to track whether a debt has been paid off or merely rolled over.

Version 2: The Open Philanthropy Project and control-seeking

Finally, the Open Philanthropy Project made its largest-ever single grant to purchase its founder a seat on a major organization’s board. This represents a transition from mere active funding to overtly purchasing influence:

The Open Philanthropy Project awarded a grant of $30 million ($10 million per year for 3 years) in general support to OpenAI. This grant initiates a partnership between the Open Philanthropy Project and OpenAI, in which Holden Karnofsky (Open Philanthropy’s Executive Director, “Holden” throughout this page) will join OpenAI’s Board of Directors and, jointly with one other Board member, oversee OpenAI’s safety and governance work.

We expect the primary benefits of this grant to stem from our partnership with OpenAI, rather than simply from contributing funding toward OpenAI’s work. While we would also expect general support for OpenAI to be likely beneficial on its own, the case for this grant hinges on the benefits we anticipate from our partnership, particularly the opportunity to help play a role in OpenAI’s approach to safety and governance issues.

Clearly the value proposition is not increasing available funds for OpenAI, if OpenAI’s founders’ billion-dollar commitment to it is real:

Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI. In total, these funders have committed $1 billion, although we expect to only spend a tiny fraction of this in the next few years.

The Open Philanthropy Project is neither using this money to fund programs that have a track record of working, nor to fund a specific program that it has prior reason to expect will do good. Rather, it is buying control, in the hope that Holden will be able to persuade OpenAI not to destroy the world, because he knows better than OpenAI’s founders.

How does the Open Philanthropy Project know that Holden knows better? Well, it’s done some active funding of programs it expects to work out. It expects those programs to work out because they were approved by a process similar to the one used by GiveWell to find charities that it expects to save lives.

If you want to acquire control over something, that implies that you think you can manage it more sensibly than whoever is in control already. Thus, buying control is a claim to have superior judgment – not just over others funding things (the original GiveWell pitch), but over those being funded.

In a footnote to the very post announcing the grant, the Open Philanthropy Project notes that it has historically tried to avoid acquiring leverage over organizations it supports, precisely because it’s not sure it knows better:

For now, we note that providing a high proportion of an organization’s funding may cause it to be dependent on us and accountable primarily to us. This may mean that we come to be seen as more responsible for its actions than we want to be; it can also mean we have to choose between providing bad and possibly distortive guidance/feedback (unbalanced by other stakeholders’ guidance/feedback) and leaving the organization with essentially no accountability.

This seems to describe two main problems introduced by becoming a dominant funder:

  1. People might accurately attribute causal responsibility for some of the organization's conduct to the Open Philanthropy Project.
  2. The Open Philanthropy Project might influence the organization to behave differently than it otherwise would.

The first seems obviously silly. I've been trying to correct the imbalance where Open Phil is criticized mainly when it makes grants, by criticizing it for holding onto too much money.

The second really is a cost as well as a benefit, and the Open Philanthropy Project has been absolutely correct to recognize this. This is the sort of thing GiveWell has consistently gotten right since the beginning and it deserves credit for making this principle clear and – until now – living up to it.

But discomfort with being dominant funders seems inconsistent with buying a board seat to influence OpenAI. If the Open Philanthropy Project thinks that Holden’s judgment is good enough that he should be in control, why only here? If he thinks that other Open Philanthropy Project AI safety grantees have good judgment but OpenAI doesn’t, why not give them similar amounts of money free of strings to spend at their discretion and see what happens? Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stuart Russell a seat on OpenAI’s board?

On the other hand, the Open Philanthropy Project is right on the merits here with respect to safe superintelligence development. Openness makes sense for weak AI, but if you’re building true strong AI you want to make sure you’re cooperating with all the other teams in a single closed effort. I agree with the Open Philanthropy Project’s assessment of the relevant risks. But it's not clear to me how often joining the bad guys to prevent their worst excesses is a good strategy, and it seems like it has to often be a mistake. Still, I’m mindful of heroes like John Rabe, Chiune Sugihara, and Oskar Schindler. And if I think someone has a good idea for improving things, it makes sense to reallocate control from people who have worse ideas, even if there's some potential better allocation.

On the other hand, is Holden Karnofsky the right person to do this? The case is mixed.

He listens to and engages with the arguments from principled advocates for AI safety research, such as Nick Bostrom, Eliezer Yudkowsky, and Stuart Russell. This is a point in his favor. But, I can think of other people who engage with such arguments. For instance, OpenAI founder Elon Musk has publicly praised Bostrom’s book Superintelligence, and founder Sam Altman has written two blog posts summarizing concerns about AI safety reasonably cogently. Altman even asked Luke Muehlhauser, former executive director of MIRI, for feedback pre-publication. He's met with Nick Bostrom. That suggests a substantial level of direct engagement with the field, although Holden has engaged for a longer time, more extensively, and more directly.

Another point in Holden’s favor, from my perspective, is that under his leadership, the Open Philanthropy Project has funded the most serious-seeming programs for both weak and strong AI safety research. But Musk also managed to (indirectly) fund AI safety research at MIRI and by Nick Bostrom personally, via his $10 million FLI grant.

The Open Philanthropy Project also says that it expects to learn a lot about AI research from this, which will help it make better decisions on AI risk in the future and influence the field in the right way. This is reasonable as far as it goes. But remember that the case for positioning the Open Philanthropy Project to do this relies on the assumption that the Open Philanthropy Project will improve matters by becoming a central influencer in this field. This move is consistent with reaching that goal, but it is not independent evidence that the goal is the right one.

Overall, there are good narrow reasons to think that this is a potential improvement over the prior situation around OpenAI – but only a small and ill-defined improvement, at considerable attentional cost, and with the offsetting potential harm of increasing OpenAI's perceived legitimacy as a long-run AI safety organization.

And it’s worrying that Open Philanthropy Project’s largest grant – not just for AI risk, but ever (aside from GiveWell Top Charity funding) – is being made to an organization at which Holden’s housemate and future brother-in-law is a leading researcher. The nepotism argument is not my central objection. If I otherwise thought the grant were obviously a good idea, it wouldn’t worry me, because it’s natural for people with shared values and outlooks to become close nonprofessionally as well. But in the absence of a clear compelling specific case for the grant, it’s worrying.

Altogether, I'm not saying this is an unreasonable shift, considered in isolation. I’m not even sure this is a bad thing for the Open Philanthropy Project to be doing – insiders may have information that I don’t, and that is difficult to communicate to outsiders. But as outsiders, there comes a point when someone’s maxed out their moral credit, and we should wait for results before actively trying to entrust the Open Philanthropy Project and its staff with more responsibility.

EA Funds and self-recommendation

The Centre for Effective Altruism is actively trying to entrust the Open Philanthropy Project and its staff with more responsibility.

The concerns of CEA’s CEO William MacAskill about GiveWell have, as far as I can tell, never been addressed, and the underlying issues have only become more acute. But CEA is now working to put more money under the control of Open Philanthropy Project staff, through its new EA Funds product – a way for supporters to delegate giving decisions to expert EA “fund managers” by giving to one of four funds: Global Health and Development, Animal Welfare, Long-Term Future, and Effective Altruism Community.

The Effective Altruism movement began by saying that because very poor people exist, we should reallocate money from ordinary people in the developed world to the global poor. Now the pitch is in effect that because very poor people exist, we should reallocate money from ordinary people in the developed world to the extremely wealthy. This is a strange and surprising place to end up, and it’s worth retracing our steps. Again, I find it easiest to think of three stages:

  1. Money can go much farther in the developing world. Here, we’ve found some examples for you. As a result, you can do a huge amount of good by giving away a large share of your income, so you ought to.
  2. We’ve found ways for you to do a huge amount of good by giving away a large share of your income for developing-world interventions, so you ought to trust our recommendations. You ought to give a large share of your income to these weird things our friends are doing that are even better, or join our friends.
  3. We’ve found ways for you to do a huge amount of good by funding weird things our friends are doing, so you ought to trust the people we trust. You ought to give a large share of your income to a multi-billion-dollar foundation that funds such things.

Stage 1: The direct pitch

At first, Giving What We Can (the organization that eventually became CEA) had a simple, easy to understand pitch:

Giving What We Can is the brainchild of Toby Ord, a philosopher at Balliol College, Oxford. Inspired by the ideas of ethicists Peter Singer and Thomas Pogge, Toby decided in 2009 to commit a large proportion of his income to charities that effectively alleviate poverty in the developing world.

[…]

Discovering that many of his friends and colleagues were interested in making a similar pledge, Toby worked with fellow Oxford philosopher Will MacAskill to create an international organization of people who would donate a significant proportion of their income to cost-effective charities.

Giving What We Can launched in November 2009, attracting significant media attention. Within a year, 64 people had joined the society, their pledged donations amounting to $21 million. Initially run on a volunteer basis, Giving What We Can took on full-time staff in the summer of 2012.

In effect, its argument was: "Look, you can do huge amounts of good by giving to people in the developing world. Here are some examples of charities that do that. It seems like a great idea to give 10% of our income to those charities."

GWWC was a simple product, with a clear, limited scope. Its founders believed that people, including them, ought to do a thing – so they argued directly for that thing, using the arguments that had persuaded them. If it wasn't for you, it was easy to figure that out; but a surprisingly large number of people were persuaded by a simple, direct statement of the argument, took the pledge, and gave a lot of money to charities helping the world's poorest.

Stage 2: Rhetoric and belief diverge

Then, GWWC staff were persuaded you could do even more good with your money in areas other than developing-world charity, such as existential risk mitigation. Encouraging donations and work in these areas became part of the broader Effective Altruism movement, and GWWC's umbrella organization was named the Centre for Effective Altruism. So far, so good.

But this left Effective Altruism in an awkward position; while leadership often personally believe the most effective way to do good is far-future stuff or similarly weird-sounding things, many people who can see the merits of the developing-world charity argument reject the argument that because the vast majority of people live in the far future, even a very small improvement in humanity’s long-run prospects outweighs huge improvements on the global poverty front. They also often reject similar scope-sensitive arguments for things like animal charities.

Giving What We Can's page on what we can achieve still focuses on global poverty, because developing-world charity is easier to explain persuasively. However, EA leadership tends to privately focus on things like AI risk. Two years ago many attendees at the EA Global conference in the San Francisco Bay Area were surprised that the conference focused so heavily on AI risk, rather than the global poverty interventions they’d expected.

Stage 3: Effective altruism is self-recommending

Shortly before the launch of the EA Funds I was told in informal conversations that they were a response to demand. Giving What We Can pledge-takers and other EA donors had told CEA that they trusted it to direct their donations, and wanted a convenient way to delegate those decisions. CEA was responding by creating a product for the people who wanted it.

This seemed pretty reasonable to me, and on the whole good. If someone wants to trust you with their money, and you think you can do something good with it, you might as well take it, because they’re estimating your skill above theirs. But not everyone agrees, and as the Madoff case demonstrates, "people are begging me to take their money" is not a definitive argument that you are doing anything real.

In practice, the funds are managed by Open Philanthropy Project staff:

We want to keep this idea as simple as possible to begin with, so we’ll have just four funds, with the following managers:

  • Global Health and Development - Elie Hassenfeld
  • Animal Welfare – Lewis Bollard
  • Long-run future – Nick Beckstead
  • Movement-building – Nick Beckstead

(Note that the meta-charity fund will be able to fund CEA; and note that Nick Beckstead is a Trustee of CEA. The long-run future fund and the meta-charity fund continue the work that Nick has been doing running the EA Giving Fund.)

It’s not a coincidence that all the fund managers work for GiveWell or Open Philanthropy.  First, these are the organisations whose charity evaluation we respect the most. The worst-case scenario, where your donation just adds to the Open Philanthropy funding within a particular area, is therefore still a great outcome.  Second, they have the best information available about what grants Open Philanthropy are planning to make, so have a good understanding of where the remaining funding gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is important, but isn’t currently addressed by Open Philanthropy.

In past years, Giving What We Can recommendations have largely overlapped with GiveWell’s top charities.

In the comments on the launch announcement on the EA Forum, several people (including me) pointed out that the Open Philanthropy Project seems to be having trouble giving away even the money it already has, so it seems odd to direct more money to Open Philanthropy Project decisionmakers. CEA’s senior marketing manager replied that the Funds were a minimum viable product to test the concept:

I don't think the long-term goal is that OpenPhil program officers are the only fund managers. Working with them was the best way to get an MVP version in place.

This also seemed okay to me, and I said so at the time.

[NOTE: I've edited the next paragraph to excise some unreliable information. Sorry for the error, and thanks to Rob Wiblin for pointing it out.]

After they were launched, though, I saw phrasings that were not nearly so cautious, instead making claims that this was generally a better way to give. As of writing this, if someone on the effectivealtruism.org website clicks on "Donate Effectively" they will be led directly to a page promoting EA Funds. When I looked at Giving What We Can’s top charities page in early April, it recommended the EA Funds "as the highest impact option for donors."

This is not a response to demand, it is an attempt to create demand by using CEA's authority, telling people that the funds are better than what they're doing already. By contrast, GiveWell's Top Charities page simply says:

Our top charities are evidence-backed, thoroughly vetted, underfunded organizations.

This carefully avoids any overt claim that they're the highest-impact option available to donors. GiveWell avoids saying that because there's no way they could know it, so saying it wouldn't be truthful.

A marketing email might have just been dashed off quickly, and an exaggerated wording might just have been an oversight. But the Giving What We Can top charities page is a standing, considered recommendation, and as noted above, it made the same claim.

The wording has since been qualified with “for most donors”, which is a good change. But the thing I’m worried about isn’t just the explicit exaggerated claims – it’s the underlying marketing mindset that made them seem like a good idea in the first place. EA seems to have switched from an endorsement of the best things outside itself, to an endorsement of itself. And it's concentrating decisionmaking power in the Open Philanthropy Project.

Effective altruism is overextended, but it doesn't have to be

There is a saying in finance, that was old even back when Keynes said it. If you owe the bank a million dollars, then you have a problem. If you owe the bank a billion dollars, then the bank has a problem.

In other words, if someone extends you a level of trust they could survive writing off, then they might call in that loan. As a result, they have leverage over you. But if they overextend, putting all their eggs in one basket, and you are that basket, then you have leverage over them; you're too big to fail. Letting you fail would be so disastrous for their interests that you can extract nearly arbitrary concessions from them, including further investment. For this reason, successful institutions often try to diversify their investments, and avoid overextending themselves. Regulators, for the same reason, try to prevent banks from becoming "too big to fail."

The Effective Altruism movement is concentrating decisionmaking power and trust as much as possible, in a way that's setting itself up to invest ever increasing amounts of confidence to keep the game going.

The alternative is to keep the scope of each organization narrow, overtly ask for trust for each venture separately, and make it clear what sorts of programs are being funded. For instance, Giving What We Can should go back to its initial focus of global poverty relief.

Like many EA leaders, I happen to believe that anything you can do to steer the far future in a better direction is much, much more consequential for the well-being of sentient creatures than any purely short-run improvement you can create now. So it might seem odd that I think Giving What We Can should stay focused on global poverty. But, I believe that the single most important thing we can do to improve the far future is hold onto our ability to accurately build shared models. If we use bait-and-switch tactics, we are actively eroding the most important type of capital we have – coordination capacity.

If you do not think giving 10% of one's income to global poverty charities is the right thing to do, then you can't in full integrity urge others to do it – so you should stop. You might still believe that GWWC ought to exist. You might still believe that it is a positive good to encourage people to give much of their income to help the global poor, if they wouldn't have been doing anything else especially effective with the money. If so, and you happen to find yourself in charge of an organization like Giving What We Can, the thing to do is write a letter to GWWC members telling them that you've changed your mind, and why, and offering to give away the brand to whoever seems best able to honestly maintain it.

If someone at the Centre for Effective Altruism fully believes in GWWC's original mission, then that might make the transition easier. If not, then one still has to tell the truth and do what's right.

And what of the EA Funds? The Long-Term Future Fund is run by Open Philanthropy Project Program Officer Nick Beckstead. If you think that it's a good thing to delegate giving decisions to Nick, then I would agree with you. Nick's a great guy! I'm always happy to see him when he shows up at house parties. He's smart, and he actively seeks out arguments against his current point of view. But the right thing to do, if you want to persuade people to delegate their giving decisions to Nick Beckstead, is to make a principled case for delegating giving decisions to Nick Beckstead. If the Centre for Effective Altruism did that, then Nick would almost certainly feel more free to allocate funds to the best things he knows about, not just the best things he suspects EA Funds donors would be able to understand and agree with.

If you can't directly persuade people, then maybe you're wrong. If the problem is inferential distance, then you've got some work to do bridging that gap.

There's nothing wrong with setting up a fund to make it easy. It's actually a really good idea. But there is something wrong with the multiple layers of vague indirection involved in the current marketing of the Far Future fund – using global poverty to sell the generic idea of doing the most good, then using CEA's identity as the organization in charge of doing the most good to persuade people to delegate their giving decisions to it, and then sending their money to some dude at the multi-billion-dollar foundation to give away at his personal discretion. The same argument applies to all four Funds.

Likewise, if you think that working directly on AI risk is the most important thing, then you should make arguments directly for working on AI risk. If you can't directly persuade people, then maybe you're wrong. If the problem is inferential distance, it might make sense to imitate the example of someone like Eliezer Yudkowsky, who used indirect methods to bridge the inferential gap by writing extensively on individual human rationality, and did not try to control others' actions in the meantime.

If Holden thinks he should be in charge of some AI safety research, then he should ask Good Ventures for funds to actually start an AI safety research organization. I'd be excited to see what he'd come up with if he had full control of and responsibility for such an organization. But I don't think anyone has a good plan to work directly on AI risk, and I don't have one either, which is why I'm not directly working on it or funding it. My plan for improving the far future is to build human coordination capacity.

(If, by contrast, Holden just thinks there needs to be coordination between different AI safety organizations, the obvious thing to do would be to work with FLI on that, e.g. by giving them enough money to throw their weight around as a funder. They organized the successful Puerto Rico conference, after all.)

Another thing that would be encouraging would be if at least one of the Funds were not administered entirely by an Open Philanthropy Project staffer, and ideally an expert who doesn't benefit from the halo of "being an EA." For instance, Chris Blattman is a development economist with experience designing programs that don't just use but generate evidence on what works. When people were arguing about whether sweatshops are good or bad for the global poor, he actually went and looked by performing a randomized controlled trial. He's leading two new initiatives with J-PAL and IPA, and expects that directors designing studies will also have to spend time fundraising. Having funding lined up seems like the sort of thing that would let them spend more time actually running programs. And more generally, he seems likely to know about funding opportunities the Open Philanthropy Project doesn't, simply because he's embedded in a slightly different part of the global health and development network.

Narrower projects that rely less on the EA brand and more on what they're actually doing, and more cooperation on equal terms with outsiders who seem to be doing something good already, would do a lot to help EA grow beyond putting stickers on its own behavior chart. I'd like to see EA grow up. I'd be excited to see what it might do.

Summary

  1. Good programs don't need to distort the story people tell about them, while bad programs do.
  2. Moral confidence games – treating past promises and trust as a track record to justify more trust – are an example of the kind of distortion mentioned in (1), that benefits bad programs more than good ones.
  3. The Open Philanthropy Project's Open AI grant represents a shift from evaluating other programs' effectiveness, to assuming its own effectiveness.
  4. EA Funds represents a shift from EA evaluating programs' effectiveness, to assuming EA's effectiveness.
  5. A shift from evaluating other programs' effectiveness, to assuming one's own effectiveness, is an example of the kind of "moral confidence game" mentioned in (2).
  6. EA ought to focus on scope-limited projects, so that it can directly make the case for those particular projects instead of relying on EA identity as a reason to support an EA organization.
  7. EA organizations ought to entrust more responsibility to outsiders who seem to be doing good things but don't overtly identify as EA, instead of trying to keep it all in the family.
(Cross-posted at my personal blog and the EA Forum.)

(Disclosure: I know many people involved at many of the organizations discussed, and I used to work for GiveWell. I have no current institutional affiliation to any of them. Everyone mentioned has always been nice to me and I have no personal complaints.)

Dragon Army: Theory & Charter (30min read)

40 Duncan_Sabien 25 May 2017 09:07PM

Author's note: This IS a rationality post (specifically, theorizing on group rationality and autocracy/authoritarianism), but the content is quite cunningly disguised beneath a lot of meandering about the surface details of a group house charter.  If you're not at least hypothetically interested in reading about the workings of an unusual group house full of rationalists in Berkeley, you can stop here.  


Section 0 of 3: Preamble

Purpose of post:  Threefold.  First, a lot of rationalists live in group houses, and I believe I have some interesting models and perspectives, and I want to make my thinking available to anyone else who's interested in skimming through it for Things To Steal.  Second, since my initial proposal to found a house, I've noticed a significant amount of well-meaning pushback and concern à la have you noticed the skulls? and it's entirely unfair for me to expect that to stop unless I make my skull-noticing evident.  Third, some nonzero number of humans are gonna need to sign the final version of this charter if the house is to come into existence, and it has to be viewable somewhere.  I figured the best place was somewhere that impartial clear thinkers could weigh in (flattery).

What is Dragon Army [Barracks]?  It's a high-commitment, high-standards, high-investment group house model with centralized leadership and an up-or-out participation norm, designed to a) improve its members and b) actually accomplish medium-to-large scale tasks requiring long-term coordination.  Tongue-in-cheek referred to as the "fascist/authoritarian take on rationalist housing," which has no doubt contributed to my being vulnerable to strawmanning but was nevertheless the correct joke to be making, lest people misunderstand what they were signing up for.  Aesthetically modeled after Dragon Army from Ender's Game (not HPMOR), with a touch of Paper Street Soap Company thrown in, with Duncan Sabien in the role of Ender/Tyler and Eli Tyre in the role of Bean/The Narrator.

Why?  Current group housing/attempts at group rationality and community-supported leveling up seem to me to be falling short in a number of ways.  First, there's not enough stuff actually happening in them (i.e. to the extent people are growing and improving and accomplishing ambitious projects, it's largely within their professional orgs or fueled by unusually agenty individuals, and not by leveraging the low-hanging fruit available in our house environments).  Second, even the group houses seem to be plagued by the same sense of unanchored abandoned loneliness that's hitting the rationalist community specifically and the millennial generation more generally.  There are a bunch of competitors for "third," but for now we can leave it at that.

"You are who you practice being."


Section 1 of 3: Underlying models

The following will be meandering and long-winded; apologies in advance.  In short, both the house's proposed aesthetic and the impulse to found it in the first place were not well-reasoned from first principles—rather, they emerged from a set of System 1 intuitions which have proven sound/trustworthy in multiple arenas and which are based on experience in a variety of domains.  This section is an attempt to unpack and explain those intuitions post-hoc, by holding plausible explanations up against felt senses and checking to see what resonates.

Problem 1: Pendulums

This one's first because it informs and underlies a lot of my other assumptions.  Essentially, the claim here is that most social progress can be modeled as a pendulum oscillating decreasingly far from an ideal.  The society is "stuck" at one point, realizes that there's something wrong about that point (e.g. that maybe we shouldn't be forcing people to live out their entire lives in marriages that they entered into with imperfect information when they were like sixteen), and then moves to correct that specific problem, often breaking some other Chesterton's fence in the process.


For example, my experience leads me to put a lot of confidence behind the claim that we've traded "a lot of people trapped in marriages that are net bad for them" for "a lot of people who never reap the benefits of what would've been a strongly net-positive marriage, because it ended too easily too early on."  The latter problem is clearly smaller, and is probably a better problem to have as an individual, but it's nevertheless clear (to me, anyway) that the loosening of the absoluteness of marriage had negative effects in addition to its positive ones.

Proposed solution: Rather than choosing between absolutes, integrate.  For example, I have two close colleagues/allies who share millennials' default skepticism of lifelong marriage, but they also are skeptical that a commitment-free lifestyle is costlessly good.  So they've decided to do handfasting, in which they're fully committed for a year and a day at a time, and there's a known period of time for asking the question "should we stick together for another round?"

In this way, I posit, you can get the strengths of the old socially evolved norm which stood the test of time, while also avoiding the majority of its known failure modes.  Sort of like building a gate into the Chesterton's fence, instead of knocking it down—do the old thing in time-boxed iterations with regular strategic check-ins, rather than assuming you can invent a new thing from whole cloth.

Caveat/skull: Of course, the assumption here is that the Old Way Of Doing Things is not a slippery slope trap, and that you can in fact avoid the failure modes simply by trying.  And there are plenty of examples of that not working, which is why Taking Time-Boxed Experiments And Strategic Check-Ins Seriously is a must.  In particular, when attempting to strike such a balance, all parties must have common knowledge agreement about which side of the ideal to err toward (e.g. innocents in prison, or guilty parties walking free?).

 

Problem 2: The Unpleasant Valley

As far as I can tell, it's pretty uncontroversial to claim that humans are systems with a lot of inertia.  Status quo bias is well researched, past behavior is the best predictor of future behavior, most people fail at resolutions, etc.

I have some unqualified speculation regarding what's going on under the hood.  For one, I suspect that you'll often find humans behaving pretty much as an effort- and energy-conserving algorithm would behave.  People have optimized their most known and familiar processes at least somewhat, which means that it requires less oomph to just keep doing what you're doing than to cobble together a new system.  For another, I think hyperbolic discounting gets way too little credit/attention, and is a major factor in knocking people off the wagon when they're trying to forego local behaviors that are known to be intrinsically rewarding for local behaviors that add up to long-term cumulative gain.
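To make the hyperbolic-discounting point concrete, here is a minimal numeric sketch using the textbook hyperbolic form V = A / (1 + kD); the function names, reward sizes, delays, and parameter values are all illustrative assumptions, not anything measured:

```python
# Sketch: hyperbolic vs. exponential discounting (illustrative parameters only).
# Hyperbolic discounters reverse their preferences as a reward gets close;
# exponential discounters never do.

def hyperbolic_value(amount, delay_days, k=0.2):
    return amount / (1 + k * delay_days)

def exponential_value(amount, delay_days, daily_rate=0.05):
    return amount * (1 - daily_rate) ** delay_days

# Choice: $100 in 10 days vs. $110 in 11 days.
print(hyperbolic_value(100, 10), hyperbolic_value(110, 11))  # ~33.3 vs ~34.4 -> wait for the bigger reward
print(hyperbolic_value(100, 0),  hyperbolic_value(110, 1))   # 100.0 vs ~91.7 -> grab the immediate reward

# The exponential discounter's ranking is the same at any distance:
print(exponential_value(100, 10), exponential_value(110, 11))  # ~59.9 vs ~62.6
print(exponential_value(100, 0),  exponential_value(110, 1))   # 100.0 vs ~104.5
```

Up close, the local, intrinsically rewarding option wins; from a distance, the long-term plan does, which is exactly the falling-off-the-wagon pattern described above.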

But in short, I think the picture of "I'm going to try something new, eh?" often looks like this:


... with an "unpleasant valley" some time after the start point.  Think about the cold feet you get after the "honeymoon period" has worn off, or the desires and opinions of a military recruit in the second week of a six-week boot camp, or the frustration that emerges two months into a new diet/exercise regime, or your second year of being forced to take piano lessons.

The problem is, people never make it to the third year, where they're actually good at piano, and start reaping the benefits, and their System 1 updates to yeah, okay, this is in fact worth it.  Or rather, they sometimes make it, if there are strong supportive structures to get them across the unpleasant valley (e.g. in a military bootcamp, they just ... make you keep going).  But left to our own devices, we'll often get halfway through an experiment and just ... stop, without ever finding out what the far side is actually like.

Proposed solution: Make experiments "unquittable."  The idea here is that (ideally) one would not enter into a new experiment unless a) one were highly confident that one could absorb the costs, if things go badly, and b) one were reasonably confident that there was an Actually Good Thing waiting at the finish line.  If (big if) we take those as a given, then it should be safe to, in essence, "lock oneself in," via any number of commitment mechanisms.  Or, to put it in other words: "Medium-Term Future Me is going to lose perspective and want to give up because of being unable to see past short-term unpleasantness to the juicy, long-term goal?  Fine, then—Medium-Term Future Me doesn't get a vote."  Instead, Post-Experiment Future Me gets the vote, including getting to update heuristics on which-kinds-of-experiments-are-worth-entering.

Caveat/skull: People who are bad at self-modeling end up foolishly locking themselves into things that are higher-cost or lower-EV than they thought, and getting burned; black swans and tail risk end up making even good bets turn out very very badly; we really should've built in an ejector seat.  This risk can be mostly ameliorated by starting small and giving people a chance to calibrate—you don't make white belts try to punch through concrete blocks, you make them punch soft, pillowy targets first.

And, of course, you do build in an ejector seat.  See next.

 

Problem 3: Saving Face

If any of you have been to a martial arts academy in the United States, you're probably familiar with the norm whereby a tardy student purchases entry into the class by first doing some pushups.  The standard explanation here is that the student is doing the pushups not as a punishment, but rather as a sign of respect for the instructor, the other students, and the academy as a whole.

I posit that what's actually going on includes that, but is somewhat more subtle/complex.  I think the real benefit of the pushup system is that it closes the loop.  

Imagine you're a ten year old kid, and your parent picked you up late from school, and you're stuck in traffic on your way to the dojo.  You're sitting there, jittering, wondering whether you're going to get yelled at, wondering whether the master or the other students will think you're lazy, imagining stuttering as you try to explain that it wasn't your fault—

Nope, none of that.  Because it's already clearly established that if you fail to show up on time, you do some pushups, and then it's over.  Done.  Finished.  Like somebody sneezed and somebody else said "bless you," and now we can all move on with our lives.  Doing the pushups creates common knowledge around the questions "does this person know what they did wrong?" and "do we still have faith in their core character?"  You take your lumps, everyone sees you taking your lumps, and there's no dangling suspicion that you were just being lazy, or that other people are secretly judging you.  You've paid the price in public, and everyone knows it, and this is a good thing.

Proposed solution: This is a solution without a concrete problem, since I haven't yet actually outlined the specific commitments a Dragon has to make (regarding things like showing up on time, participating in group activities, and making personal progress).  But in essence, the solution is this: you have to build into your system from the beginning a set of ways-to-regain-face.  Ways to hit the ejector seat on an experiment that's going screwy without losing all social standing; ways to absorb the occasional misstep or failure-to-adequately-plan; ways to be less-than-perfect and still maintain the integrity of a system that's geared toward focusing everyone on perfection.  In short, people have to know (and others have to know that they know, and they have to know that others know that they know) exactly how to make amends to the social fabric, in cases where things go awry, so that there's no question about whether they're trying to make amends, or whether that attempt is sufficient.  


Caveat/skull: The obvious problem is people attempting to game the system—they notice that ten pushups is way easier than doing the diligent work required to show up on time 95 times out of 100.  The next obvious problem is that the price is set too low for the group, leaving them to still feel jilted or wronged, and the next obvious problem is that the price is set too high for the individual, leaving them to feel unfairly judged or punished (the fun part is when both of those are true at the same time).  Lastly, there's something in the mix about arbitrariness—what do pushups have to do with lateness, really?  I mean, I get that it's paying some kind of unpleasant cost, but ...


Problem 4: Defections & Compounded Interest

I'm pretty sure everyone's tired of hearing about one-boxing and iterated prisoners' dilemmas, so I'm going to move through this one fairly quickly even though it could be its own whole multipage post.  In essence, the problem is that any rate of tolerance of real defection (i.e. unmitigated by the social loop-closing norms above) ultimately results in the destruction of the system.  Another way to put this is that people underestimate by a couple of orders of magnitude the corrosive impact of their defections—we often convince ourselves that 90% or 99% is good enough, when in fact what's needed is something like 99.99%.

There's something good that happens if you put a little bit of money away with every paycheck, and it vanishes or is severely curtailed once you stop, or start skipping a month here and there.  Similarly, there's something good that happens when a group of people agree to meet in the same place at the same time without fail, and it vanishes or is severely curtailed once one person skips twice.

In my work at the Center for Applied Rationality, I frequently tell my colleagues and volunteers "if you're 95% reliable, that means I can't rely on you."  That's because I'm in a context where "rely" means really trust that it'll get done.  No, really.  No, I don't care what comes up, DID YOU DO THE THING?  And if the answer is "Yeah, 19 times out of 20," then I can't give that person tasks ever again, because we run more than 20 workshops and I can't have one of them catastrophically fail.

(I mean, I could.  It probably wouldn't be the end of the world.  But that's exactly the point—I'm trying to create a pocket universe in which certain things, like "the CFAR workshop will go well," are absolutely reliable, and the "absolute" part is important.)
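To put rough numbers on the 95%-reliable claim (the 20-workshop count comes from the paragraph above; the reliability levels and the code itself are just an illustrative sketch):

```python
# Sketch: chance that a task gets done every single time across ~20 workshops,
# for a few levels of per-task reliability (illustrative numbers).
for reliability in (0.95, 0.99, 0.9999):
    p_no_failures = reliability ** 20
    print(f"{reliability} per task -> {p_no_failures:.1%} chance of zero failures in 20 workshops")

# 0.95   -> ~35.8%  (roughly two-in-three odds of at least one catastrophic miss)
# 0.99   -> ~81.8%  (still about a one-in-five chance of a miss)
# 0.9999 -> ~99.8%  (the "99.99%" regime is where "rely on" starts meaning what it says)
```

That gap between 95% and 99.99% is the couple-of-orders-of-magnitude underestimate mentioned earlier.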

As far as I can tell, it's hyperbolic discounting all over again—the person who wants to skip out on the meetup sees all of these immediate, local costs to attending, and all of these visceral, large gains to defection, and their S1 doesn't properly weight the impact to those distant, cumulative effects (just like the person who's going to end up with no retirement savings because they wanted those new shoes this month instead of next month).  1.01^n takes a long time to look like it's going anywhere, and in the meantime the quick one-time payoff of 1.1 that you get by knocking everything else down to .99^n looks juicy and delicious and seems justified.
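And the compounding arithmetic itself, for concreteness (the exponents and the one-time 1.1 payoff are the illustrative figures from the paragraph above):

```python
# Sketch: small consistent gains (1.01^n) vs. small consistent defections (0.99^n),
# compared against a one-time 1.1x payoff. Illustrative numbers only.
gain  = [1.01 ** n for n in range(201)]   # ~1.1 at n=10, ~2.7 at n=100, ~7.3 at n=200
decay = [0.99 ** n for n in range(201)]   # ~0.90 at n=10, ~0.37 at n=100, ~0.13 at n=200

# For the first ten or so rounds, the one-time 1.1x defection payoff beats the compounding line...
crossover = next(n for n, v in enumerate(gain) if v > 1.1)
print(crossover)  # 10
# ...after which 1.01^n pulls away for good, while 0.99^n quietly erodes almost everything.
```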

But something magical does accrue when you make the jump from 99% to 100%.  That's when you see teams that truly trust and rely on one another, or marriages built on unshakeable faith (and you see what those teams and partnerships can build, when they can adopt time horizons of years or decades rather than desperately hoping nobody will bail after the third meeting).  It starts with a common knowledge understanding that yes, this is the priority, even—no, wait, especially—when it seems like there are seductively convincing arguments for it to not be.  When you know—not hope, but know—that you will make a local sacrifice for the long-term good, and you know that they will, too, and you all know that you all know this, both about yourselves and about each other.

Proposed solution: Discuss, and then agree upon, and then rigidly and rigorously enforce a norm of perfection in all formal undertakings (and, correspondingly, be more careful and more conservative about which undertakings you officially take on, versus which things you're just casually trying out as an informal experiment), with said norm to be modified/iterated only during predecided strategic check-in points and not on the fly, in the middle of things.  Build a habit of clearly distinguishing targets you're going to hit from targets you'd be happy to hit.  Agree upon and uphold surprisingly high costs for defection, Hofstadter style, recognizing that a cost that feels high enough probably isn't.  Leave people wiggle room as in Problem 3, but define that wiggle room extremely concretely and objectively, so that it's clear in advance when a line is about to be crossed.  Be ridiculously nitpicky and anal about supporting standards that don't seem worth supporting, in the moment, if they're in arenas that you've previously assessed as susceptible to compounding.  Be ruthless about discarding standards during strategic review; if a member of the group says that X or Y or Z is too high-cost for them to sustain, believe them, and make decisions accordingly.

Caveat/skull: Obviously, because we're humans, even people who reflectively endorse such an overall solution will chafe when it comes time for them to pay the price (I certainly know I've chafed under standards I fought to install).  At that point, things will seem arbitrary and overly constraining, priorities will seem misaligned (and might actually be), and then feelings will be hurt and accusations will be leveled and things will be rough.  The solution there is to have, already in place, strong and open channels of communication, strong norms and scaffolds for emotional support, strong default assumption of trust and good intent on all sides, etc. etc.  This goes wrongest when things fester and people feel they can't speak up; it goes much better if people have channels to lodge their complaints and reservations and are actively incentivized to do so (and can do so without being accused of defecting on the norm-in-question; criticism =/= attack).

 

Problem 5: Everything else

There are other models and problems in the mix.  For instance:

  • A model surrounding buy-in and commitment that deals with an escalating cycle of asks-and-rewards.
  • A model of how to effectively leverage a group around you to accomplish ambitious tasks, which requires you to first lay down some "topsoil" of simple/trivial/arbitrary activities that starts the growth of an ecology of affordances.
  • A theory that the strategy of trying things and doing things outstrips the strategy of think-until-you-identify-worthwhile-action, and that rationalists in particular are crippling themselves through decision paralysis/letting the perfect be the enemy of the good, when just doing vaguely interesting projects would ultimately gain them more skill and get them further ahead.
  • A strong sense, based off both research and personal experience, that physical proximity matters, and that you can't build the correct kind of strength and flexibility and trust into your relationships without actually spending significant amounts of time with one another in meatspace on a regular basis, regardless of whether that makes tactical sense given your object-level projects and goals.

But I'm going to hold off on going into those in detail until people insist on hearing about them or ask questions/pose hesitations that could be answered by them.


Section 2 of 3: Power dynamics

All of the above was meant to point at reasons why I suspect trusting individuals responding to incentives moment-by-moment to be a weaker and less effective strategy than building an intentional community that Actually Asks Things Of Its Members.  It was also meant to justify, at least indirectly, why a strong guiding hand might be necessary given that our community's evolved norms haven't really produced results (in the group houses) commensurate with the promises of EA and rationality.

Ultimately, though, what matters is not the problems and solutions themselves so much as the light they shine on my aesthetics (since, in the actual house, it's those aesthetics that will be used to resolve epistemic gridlock).  In other words, it's not so much those arguments as it is the fact that Duncan finds those arguments compelling.  It's worth noting that the people most closely involved with this project (i.e. my closest advisors and those most likely to actually sign on as housemates) have been encouraged to spend a significant amount of time explicitly vetting me with regards to questions like "does this guy actually think things through," "is this guy likely to be stupid or meta-stupid," "will this guy listen/react/update/pivot in response to evidence or consensus opposition," and "when this guy has intuitions that he can't explain, do they tend to be validated in the end?"

In other words, it's fair to view this whole post as an attempt to prove general trustworthiness (in both domain expertise and overall sanity), because—well—that's what it is.  In milieus like the military, authority figures expect (and get) obedience irrespective of whether or not they've earned their underlings' trust; rationalists tend to have a much higher bar before they're willing to subordinate their decisionmaking processes, yet still that's something this sort of model requires of its members (at least from time to time, in some domains, in a preliminary "try things with benefit of the doubt" sort of way).  I posit that Dragon Army Barracks works (where "works" means "is good and produces both individual and collective results that outstrip other group houses by at least a factor of three") if and only if its members are willing to hold doubt in reserve and act with full force in spite of reservations—if they're willing to trust me more than they trust their own sense of things (at least in the moment, pending later explanation and recalibration on my part or theirs or both).

And since that's a) the central difference between DA and all the other group houses, which are collections of non-subordinate equals, and b) quite the ask, especially in a rationalist community, it's entirely appropriate that it be given the greatest scrutiny.  Likely participants in the final house spent ~64 consecutive hours in my company a couple of weekends ago, specifically to play around with living under my thumb and see whether it's actually a good place to be; they had all of the concerns one would expect and (I hope) had most of those concerns answered to their satisfaction.  The rest of you will have to make do with grilling me in the comments here.

 

"Why was Tyler Durden building an army?  To what purpose?  For what greater good? ...in Tyler we trusted."

 

Power and authority are generally anti-epistemic—for every instance of those-in-power defending themselves against the barbarians at the gates or anti-vaxxers or the rise of Donald Trump, there are a dozen instances of them squashing truth, undermining progress that would make them irrelevant, and aggressively promoting the status quo.

Thus, every attempt by an individual to gather power about themselves is at least suspect, given regular ol' incentive structures and regular ol' fallible humans.  I can (and do) claim to be after a saved world and a bunch of people becoming more the-best-versions-of-themselves-according-to-themselves, but I acknowledge that's exactly the same claim an egomaniac would make, and I acknowledge that the link between "Duncan makes all his housemates wake up together and do pushups" and "the world is incrementally less likely to end in gray goo and agony" is not obvious.

And it doesn't quite solve things to say, "well, this is an optional, consent-based process, and if you don't like it, don't join," because good and moral people have to stop and wonder whether their friends and colleagues with slightly weaker epistemics and slightly less-honed allergies to evil are getting hoodwinked.  In short, if someone's building a coercive trap, it's everyone's problem.

 

"Over and over he thought of the things he did and said in his first practice with his new army. Why couldn't he talk like he always did in his evening practice group? No authority except excellence. Never had to give orders, just made suggestions. But that wouldn't work, not with an army. His informal practice group didn't have to learn to do things together. They didn't have to develop a group feeling; they never had to learn how to hold together and trust each other in battle. They didn't have to respond instantly to command.

And he could go to the other extreme, too. He could be as lax and incompetent as Rose the Nose, if he wanted. He could make stupid mistakes no matter what he did. He had to have discipline, and that meant demanding—and getting—quick, decisive obedience. He had to have a well-trained army, and that meant drilling the soldiers over and over again, long after they thought they had mastered a technique, until it was so natural to them that they didn't have to think about it anymore."

 

But on the flip side, we don't have time to waste.  There's existential risk, for one, and even if you don't buy ex-risk à la AI or bioterrorism or global warming, people's available hours are trickling away at the alarming rate of one hour per hour, and none of us are moving fast enough to get All The Things done before we die.  I personally feel that I am operating far below my healthy sustainable maximum capacity, and I'm not alone in that, and something like Dragon Army could help.

So.  Claims, as clearly as I can state them, in answer to the question "why should a bunch of people sacrifice non-trivial amounts of their autonomy to Duncan?"

1. Somebody ought to run this, and no one else will.  On the meta level, this experiment needs to be run—we have like twenty or thirty instances of the laissez-faire model, and none of the high-standards/hardcore one, and also not very many impressive results coming out of our houses.  Due diligence demands investigation of the opposite hypothesis.  On the object level, it seems uncontroversial to me that there are goods waiting on the other side of the unpleasant valley—goods that a team of leveled-up, coordinated individuals with bonds of mutual trust can seize that the rest of us can't even conceive of, at this point, because we don't have a deep grasp of what new affordances appear once you get there.

2. I'm the least unqualified person around.  Those words are chosen deliberately, for this post on "less wrong."  I have a unique combination of expertise that includes being a rationalist, sixth grade teacher, coach, RA/head of a dormitory, ringleader of a pack of hooligans, member of two honor code committees, curriculum director, obsessive sci-fi/fantasy nerd, writer, builder, martial artist, parkour guru, maker, and generalist.  If anybody's intuitions and S1 models are likely to be capable of distinguishing the uncanny valley from the real deal, I posit mine are.

3. There's never been a safer context for this sort of experiment.  It's 2017, we live in the United States, and all of the people involved are rationalists.  We all know about NVC and double crux, we're all going to do Circling, we all know about Gendlin's Focusing, and we've all read the Sequences (or will soon).  If ever there was a time to say "let's all step out onto the slippery slope, I think we can keep our balance," it's now—there's no group of people better equipped to stop this from going sideways.

4. It does actually require a tyrant. As a part of a debrief during the weekend experiment/dry run, we went around the circle and people talked about concerns/dealbreakers/things they don't want to give up.  One interesting thing that popped up is that, according to consensus, it's literally impossible to find a time of day when the whole group could get together to exercise.  This happened even with each individual being willing to make personal sacrifices and doing things that are somewhat costly.

If, of course, the expectation is that everybody shows up on Tuesday and Thursday evenings, and the cost of not doing so is not being present in the house, suddenly the situation becomes simple and workable.  And yes, this means some kids left behind (ctrl+f), but the whole point of this is to be instrumentally exclusive and consensually high-commitment.  You just need someone to make the actual final call—there are too many threads for the coordination problem of a house of this kind to be solved by committee, and too many circumstances in which it's impossible to make a principled, justifiable decision between 492 almost-indistinguishably-good options.  On top of that, there's a need for there to be some kind of consistent, neutral force that sets course, imposes consistency, resolves disputes/breaks deadlock, and absorbs all of the blame for the fact that it's unpleasant to be forced to do things you know you ought to but don't want to do.

And lastly, we (by which I indicate the people most likely to end up participating) want the house to do stuff—to actually take on projects of ambitious scope, things that require ten or more talented people reliably coordinating for months at a time.  That sort of coordination requires a quarterback on the field, even if the strategizing in the locker room is egalitarian.

5. There isn't really a status quo for power to abusively maintain.  Dragon Army Barracks is not an object-level experiment in making the best house; it's a meta-level experiment attempting (through iteration rather than armchair theorizing) to answer the question "how best does one structure a house environment for growth, self-actualization, productivity, and social synergy?"  It's taken as a given that we'll get things wrong on the first and second and third try; the whole point is to shift from one experiment to the next, gradually accumulating proven-useful norms via consensus mechanisms, and the centralized power is mostly there just to keep the transitions smooth and seamless.  More importantly, the fundamental conceit of the model is "Duncan sees a better way, which might take some time to settle into," but after e.g. six months, if the thing is not clearly positive and at least well on its way to being self-sustaining, everyone ought to abandon it anyway.  In short, my tyranny, if net bad, has a natural time limit, because people aren't going to wait around forever for their results.

6. The experiment has protections built in.  Transparency, operationalization, and informed consent are the name of the game; communication and flexibility are how the machine is maintained.  Like the Constitution, Dragon Army's charter and organization are meant to be "living documents" that constrain change only insofar as they impose reasonable limitations on how wantonly change can be enacted.


Section 3 of 3: Dragon Army Charter (DRAFT)

Statement of purpose:

Dragon Army Barracks is a group housing and intentional community project which exists to support its members socially, emotionally, intellectually, and materially as they endeavor to improve themselves, complete worthwhile projects, and develop new and useful culture, in that order.  In addition to the usual housing commitments (i.e. rent, utilities, shared expenses), its members will make limited and specific commitments of time, attention, and effort averaging roughly 90 hours a month (~1.5hr/day plus occasional weekend activities).

Dragon Army Barracks will have an egalitarian, flat power structure, with the exception of a commander (Duncan Sabien) and a first officer (Eli Tyre).  The commander's role is to create structure by which the agreed-upon norms and standards of the group shall be discussed, decided, and enforced, to manage entry to and exit from the group, and to break epistemic gridlock/make decisions when speed or simplification is required.  The first officer's role is to manage and moderate the process of building consensus around the standards of the Army—what they are, and in what priority they should be met, and with what consequences for failure.  Other "management" positions may come into existence in limited domains (e.g. if a project arises, it may have a leader, and that leader will often not be Duncan or Eli), and will have their scope and powers defined at the point of creation/ratification.

Initial areas of exploration:

The particular object level foci of Dragon Army Barracks will change over time as its members experiment and iterate, but at first it will prioritize the following:

  • Physical proximity (exercising together, preparing and eating meals together, sharing a house and common space)
  • Regular activities for bonding and emotional support (Circling, pair debugging, weekly retrospective, tutoring/study hall)
  • Regular activities for growth and development (talk night, tutoring/study hall, bringing in experts, cross-pollination)
  • Intentional culture (experiments around lexicon, communication, conflict resolution, bets & calibration, personal motivation, distribution of resources & responsibilities, food acquisition & preparation, etc.)
  • Projects with "shippable" products (e.g. talks, blog posts, apps, events; some solo, some partner, some small group, some whole group; ranging from short-term to year-long)
  • Regular (every 6-10 weeks) retreats to learn a skill, partake in an adventure or challenge, or simply change perspective

Dragon Army Barracks will begin with a move-in weekend that will include ~10 hours of group bonding, discussion, and norm-setting.  After that, it will enter an eight-week bootcamp phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk, e.g. Tue/Fri/Sun)
  • Whole group dinner and retrospective (120min, 1x/wk, e.g. Tue evening)
  • Small group baseline skill acquisition/study hall/cross-pollination (90min, 1x/wk)
  • Small group circle-shaped discussion (120min, 1x/wk)
  • Pair debugging or rapport building (45min, 2x/wk)
  • One-on-one check-in with commander (20min, 2x/wk)
  • Chore/house responsibilities (90min distributed)
  • Publishable/shippable solo small-scale project work with weekly public update (100min distributed)

... for a total time commitment of 16h/week or 128 hours total, followed by a whole group retreat and reorientation.  The house will then enter an eight-week trial phase, in which each member will participate in at least the following:

  • Whole group exercise (90min, 3x/wk)
  • Whole group dinner, retrospective, and plotting (150min, 1x/wk)
  • Small group circling and/or pair debugging (120min distributed)
  • Publishable/shippable small group medium-scale project work with weekly public update (180min distributed)
  • One-on-one check-in with commander (20min, 1x/wk)
  • Chore/house responsibilities (60min distributed)
... for a total time commitment of 13h/week or 104 hours total, again followed by a whole group retreat and reorientation.  The house will then enter a third phase where commitments will likely change, but will include at a minimum whole group exercise, whole group dinner, and some specific small-group responsibilities, either social/emotional or project/productive (once again ending with a whole group retreat).  At some point between the second and third phase, the house will also ramp up for its first large-scale project, which is yet to be determined but will be roughly on the scale of putting on a CFAR workshop in terms of time and complexity.

Should the experiment prove successful past its first six months, and worth continuing for a full year or longer, by the end of the first year every Dragon shall have a skill set including, but not limited to:
  • Above-average physical capacity
  • Above-average introspection
  • Above-average planning & execution skill
  • Above-average communication/facilitation skill
  • Above-average calibration/debiasing/rationality knowledge
  • Above-average scientific lab skill/ability to theorize and rigorously investigate claims
  • Average problem-solving/debugging skill
  • Average public speaking skill
  • Average leadership/coordination skill
  • Average teaching and tutoring skill
  • Fundamentals of first aid & survival
  • Fundamentals of financial management
  • At least one of: fundamentals of programming, graphic design, writing, A/V/animation, or similar (employable mental skill)
  • At least one of: fundamentals of woodworking, electrical engineering, welding, plumbing, or similar (employable trade skill)
Furthermore, every Dragon should have participated in:
  • At least six personal growth projects involving the development of new skill (or honing of prior skill)
  • At least three partner- or small-group projects that could not have been completed alone
  • At least one large-scale, whole-army project that either a) had a reasonable chance of impacting the world's most important problems, or b) caused significant personal growth and improvement
  • Daily contributions to evolved house culture
Speaking of evolved house culture...

Because of both a) the expected value of social exploration and b) the cumulative positive effects of being in a group that's trying things regularly and taking experiments seriously, Dragon Army will endeavor to adopt no fewer than one new experimental norm per week.  Each new experimental norm should have an intended goal or result, an informal theoretical backing, and a set re-evaluation time (default three weeks).  There are two routes by which a new experimental norm is put into place:

  • The experiment is proposed by a member, discussed in a whole group setting, and meets the minimum bar for adoption (>60% of the Army supports, with <20% opposed and no hard vetos)
  • The Army has proposed no new experiments in the previous week, and the Commander proposes three options.  The group may then choose one by vote/consensus, or generate three new options, from which the Commander may choose.
Examples of some of the early norms which the house is likely to try out from day one (hit the ground running):
  • The use of a specific gesture to greet fellow Dragons (house salute)
  • Various call-and-response patterns surrounding house norms (e.g. "What's rule number one?" "PROTECT YOURSELF!")
  • Practice using hook, line, and sinker in social situations (three items other than your name for introductions)
  • The anti-Singer rule for open calls-for-help (if Dragon A says "hey, can anyone help me with X?" the responsibility falls on the physically closest housemate to either help or say "Not me/can't do it!" at which point the buck passes to the next physically closest person)
  • An "interrupt" call that any Dragon may use to pause an ongoing interaction for fifteen seconds
  • A "culture of abundance" in which food and leftovers within the house are default available to all, with exceptions deliberately kept as rare as possible
  • A "graffiti board" upon which the Army keeps a running informal record of its mood and thoughts

Dragon Army Code of Conduct
While the norms and standards of Dragon Army will be mutable by design, the following (once revised and ratified) will be the immutable code of conduct for the first eight weeks, and is unlikely to change much after that.

  1. A Dragon will protect itself, i.e. will not submit to pressure causing it to do things that are dangerous or unhealthy, nor wait around passively when in need of help or support (note that this may cause a Dragon to leave the experiment!).
  2. A Dragon will take responsibility for its actions, emotional responses, and the consequences thereof, e.g. if late will not blame bad luck/circumstance, if angry or triggered will not blame the other party.
  3. A Dragon will assume good faith in all interactions with other Dragons and with house norms and activities, i.e. will not engage in strawmanning or the horns effect.
  4. A Dragon will be candid and proactive, e.g. will give other Dragons a chance to hear about and interact with negative models once they notice them forming, or will not sit on an emotional or interpersonal problem until it festers into something worse.
  5. A Dragon will be fully present and supportive when interacting with other Dragons in formal/official contexts, i.e. will not engage in silent defection, undermining, halfheartedness, aloofness, subtle sabotage, or other actions which follow the letter of the law while violating the spirit.  Another way to state this is that a Dragon will practice compartmentalization—will be able to simultaneously hold "I'm deeply skeptical about this" alongside "but I'm actually giving it an honest try," and postpone critique/complaint/suggestion until predetermined checkpoints.  Yet another way to state this is that a Dragon will take experiments seriously, including epistemic humility and actually seeing things through to their ends rather than fiddling midway.
  6. A Dragon will take the outside view seriously, maintain epistemic humility, and make subject-object shifts, i.e. will act as a behaviorist and agree to judge and be judged on the basis of actions and revealed preferences rather than intentions, hypotheses, and assumptions (this one's similar to #2 and hard to put into words, but for example, a Dragon who has been having trouble getting to sleep but has never informed the other Dragons that their actions are keeping them awake will agree that their anger and frustration, while valid internally, may not fairly be vented on those other Dragons, who were never given a chance to correct their behavior).  Another way to state this is that a Dragon will embrace the maxim "don't believe everything that you think."
  7. A Dragon will strive for excellence in all things, modified only by a) prioritization and b) doing what is necessary to protect itself/maximize total growth and output on long time scales.
  8. A Dragon will not defect on other Dragons.
There will be various operationalizations of the above commitments into specific norms (e.g. a Dragon will read all messages and emails within 24 hours, and if a full response is not possible within that window, will send a short response indicating when the longer response may be expected) that will occur once the specific members of the Army have been selected and have individually signed on.  Disputes over violations of the code of conduct, or confusions about its operationalization, will first be addressed one-on-one or in informal small group, and will then move to general discussion, and then to the first officer, and then to the commander.

Note that all of the above is deliberately kept somewhat flexible/vague/open-ended/unsettled, because we are trying not to fall prey to GOODHART'S DEMON.


Random Logistics
  1. The initial filter for attendance will include a one-on-one interview with the commander (Duncan), who will be looking for a) credible intention to put forth effort toward the goal of having a positive impact on the world, b) likeliness of a strong fit with the structure of the house and the other participants, and c) reliability à la financial stability and ability to commit fully to long-term endeavors.  Final decisions will be made by the commander and may be informally questioned/appealed but not overruled by another power.
  2. Once a final list of participants is created, all participants will sign a "free state" contract of the form "I agree to move into a house within five miles of downtown Berkeley (for length of time X with financial obligation Y) sometime in the window of July 1st through September 30th, conditional on at least seven other people signing this same agreement."  At that point, the search for a suitable house will begin, possibly with delegation to participants.
  3. Rents in that area tend to run ~$1100 per room, on average, plus utilities, plus a 10% contribution to the general house fund.  Thus, someone hoping for a single should, in the 85th percentile worst case, be prepared to make a ~$1400/month commitment.  Similarly, someone hoping for a double should be prepared for ~$700/month, and someone hoping for a triple should be prepared for ~$500/month, and someone hoping for a quad should be prepared for ~$350/month.  (A rough sketch of this arithmetic follows this list.)
  4. The initial phase of the experiment is a six month commitment, but leases are generally one year.  Any Dragon who leaves during the experiment is responsible for continuing to pay their share of the lease/utilities/house fund, unless and until they have found a replacement person the house considers acceptable, or have found three potential viable replacement candidates and had each one rejected.  After six months, should the experiment dissolve, the house will revert to being simply a house, and people will bear the normal responsibility of "keep paying until you've found your replacement."  (This will likely be easiest to enforce by simply having as many names as possible on the actual lease.)
  5. Of the ~90hr/month, it is assumed that ~30 are whole-group, ~30 are small group or pair work, and ~30 are independent or voluntarily-paired work.  Furthermore, it is assumed that the commander maintains sole authority over ~15 of those hours (i.e. can require that they be spent in a specific way consistent with the aesthetic above, even in the face of skepticism or opposition).
  6. We will have an internal economy whereby people can trade effort for money and money for time and so on and so forth, because heck yeah.
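
A quick back-of-the-envelope check of the cost figures in item 3 above (my own sketch, not part of the charter): it assumes utilities come to roughly $190 per room, and that rent, utilities, and the 10% house-fund contribution are all split evenly among a room's occupants.

```python
# Rough per-person monthly cost, using the charter's ~$1100/room rent and
# 10% house-fund figures.  The ~$190/room utilities number is an assumption
# of mine, backed out from the ~$1400 single-room estimate above.
ROOM_RENT = 1100          # ~$/room, from the charter
UTILITIES_PER_ROOM = 190  # assumed; not stated in the charter
HOUSE_FUND_RATE = 0.10    # 10% contribution, from the charter

def monthly_commitment(occupants: int) -> float:
    """Estimated monthly cost per person for a room shared by `occupants` people."""
    room_total = ROOM_RENT * (1 + HOUSE_FUND_RATE) + UTILITIES_PER_ROOM
    return room_total / occupants

for n, label in [(1, "single"), (2, "double"), (3, "triple"), (4, "quad")]:
    print(f"{label}: ~${monthly_commitment(n):,.0f}/month")
# -> single ~$1,400, double ~$700, triple ~$467 (the charter rounds to ~$500), quad ~$350
```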

Conclusion: Obviously this is neither complete nor perfect.  What's wrong, what's missing, what do you think?  I'm going to much more strongly weight the opinions of Berkeleyans who are likely to participate, but I'm genuinely interested in hearing from everyone, particularly those who notice red flags (the goal is not to do anything stupid or meta-stupid).  Have fun tearing it up.

(sorry for the abrupt cutoff, but this was meant to be published Monday and I've just ... not ... been ... sleeping ... to get it done)

Introducing the Instrumental Rationality Sequence

27 lifelonglearner 26 April 2017 09:53PM

What is this project?

I am going to be writing a new sequence of articles on instrumental rationality. The end goal is to have a compiled ebook of all the essays, so the articles themselves are intended to be chapters in the finalized book. There will also be pictures.


I intend for the majority of the articles to be backed by somewhat rigorous research, similar in quality to Planning 101 (though perhaps with fewer citations). Broadly speaking, the plan is to introduce a topic, summarize the research on it, give some models and mechanisms, and finish off with some techniques to leverage the models.


The rest of the sequence will be interspersed with general essays on dealing with these concepts, similar to In Defense of the Obvious. Lastly, there will be a few experimental essays on my attempt to synthesize existing models into useful-but-likely-wrong models of my own, like Attractor Theory.


I will likely also recycle / cannibalize some of my older writings for this new project, but I obviously won’t post the repeated material here again as new stuff.


 


 

What topics will I cover?

Here is a broad overview of the three main topics I hope to go over:


(Ordering is not set.)


Overconfidence in Planning: I’ll be stealing stuff from Planning 101 and rewriting it a bit for clarity, so not much will be changed. I’ll likely add more on the actual models of how overconfidence creeps into our plans.


Motivation: I’ll try to go over procrastination, akrasia, and behavioral economics (hyperbolic discounting, decision instability, precommitment, etc.)


Habituation: This will try to cover what habits are, conditioning, incentives, and ways to take the above areas and habituate them, i.e. actually putting instrumental rationality techniques into practice.


Other areas I may want to cover:

Assorted Object-Level Things: The Boring Advice Repository has a whole bunch of assorted ways to improve life that I think might be useful to reiterate in some fashion.


Aversions and Ugh Fields: I don’t know too much about these things from a domain knowledge perspective, but it’s my impression that being able to debug these sorts of internal sticky situations is a very powerful skill. If I were to write this section, I’d try to focus on Focusing and some assorted S1/S2 communication things. And maybe also epistemics.


Ultimately, the point here isn’t to offer polished rationality techniques people can immediately apply, but rather to give people an overview of the relevant fields with enough techniques that they get the hang of what it means to start making their own rationality.


 


 

Why am I doing this?

Niche Role: On LessWrong, there currently doesn’t appear to be a good in-depth series on instrumental rationality. Rationality: From AI to Zombies seems very strong for giving people a worldview that enables things like deeper analysis, but it leans very much into the epistemic side of things.


It’s my opinion that, aside from perhaps Nate Soares’s series on Replacing Guilt (which I would be somewhat hesitant to recommend to everyone), there is no in-depth repository/sequence that ties together these ideas of motivation, planning, procrastination, etc.


Granted, there have been many excellent posts here on several areas, but they've been fairly directed. Luke's stuff on beating procrastination, for example, is fantastic. I'm aiming for a broader overview that hits the current models and research on different things.


I think this means that creating this sequence could add a lot of value, especially to people trying to create their own techniques.


Open-Sourcing Rationality: It’s clear that work is being done on furthering rationality by groups like Leverage and CFAR. However, for various reasons, the work they do is not always available to the public. I’d like to give people who are interested but unable to directly work with these organizations something they can use to jump-start their own investigations.


I’d like this to become a similar Schelling Point that we could direct people to if they want to get started.


I don’t mean to imply that what I’ll produce is of the same caliber, but I do think it makes sense to have some sort of pipeline to get rationalists up to speed with the areas that (in my mind) tie into figuring out instrumental rationality. When I first began looking into this field, there was a lot of information that was scattered in many places.


I’d like to create something cohesive that people can point to when newcomers want to get started with instrumental rationality that similarly gives them a high level overview of the many tools at their disposal.


Revitalizing LessWrong: It’s my impression that independent essays on instrumental rationality have slowed over the years. (But also, as I mentioned above, this doesn’t mean stuff hasn’t happened. CFAR’s been hard at work iterating their own techniques, for example.) As LW 2.0 is being talked about, this seems like an opportune time to provide some new content and help reorient LW towards once again becoming a discussion hub for rationality.


 


 

Where does LW fit in?

Crowd-sourcing Content: I fully expect that many other people will have fantastic ideas that they want to contribute. I think that’s a good idea. Given some basic things like formatting / roughly consistent writing style throughout, I think it’d be great if other potential writers see this post as an invitation to start thinking about things they’d like to write / research about instrumental rationality.


Feedback: I’ll be doing all this writing on a public Google Doc with posts that feature chapters once they’re done, so hopefully there’s ample room to improve and take in constructive criticism. Feedback on LW is often high-quality, and I expect that to definitely improve what I will be writing.


Other Help: I probably can’t comb through every single research paper out there, so if you see relevant information I missed or want to help with the research process, let me know! Likewise, if you think there are other cool ways you can contribute, feel free to either send me a PM or leave a comment below.


 


 

Why am I the best person to do this?

I’m probably not the best person to be doing this project, obviously.


But, as a student, I have a lot of time on my hands, and time appears to be a major limiting reactant in this whole process.


Additionally, I’ve been somewhat involved with CFAR, so I have some mental models about their flavor of instrumental rationality; I hope this translates into meaning I'm writing about stuff that isn't just a direct rehash of their workshop content.


Lastly, I’m very excited about this project, so you can expect me to put in about 10,000 words (~40 pages) before I take some minor breaks to reset. My short-term goals (for the next month) will be on note-taking and finding research for habits, specifically, and outlining more of the sequence.

 

The AI Alignment Problem Has Already Been Solved(?) Once

27 SquirrelInHell 22 April 2017 01:24PM

Hat tip: Owen posted about trying to one-man the AI control problem in 1 hour. What the heck, why not? In the worst case, it's a good exercise. But I might actually have come across something useful.

第一

I will try to sell you on an idea that might prima facie appear to be quirky and maybe not that interesting. However, if you keep staring at it, you might find that it reaches into the structure of the world quite deeply. Then the idea will seem obvious, and gain potential to take your thoughts in new exciting directions.

My presentation of the idea, and many of the insinuations and conclusions I draw from it, are likely flawed. But one thing I can tell for sure: there is stuff to be found here. I encourage you to use your own brain, and mine the idea for what it's worth.

To start off, I want you to imagine two situations.

Situation one: you are a human trying to make yourself go to the gym. However, you are procrastinating, which means that you never actually go there, even though you know it's good for you, and caring about your health will extend your lifespan. You become frustrated with this situation, and so you sign up for a training program that starts in two weeks and will require you to go to the gym three times per week. You pay in advance, to make sure the sunk cost fallacy will prevent you from weaseling out of it. It's now 99% certain that you will go to the gym. Yay! Your goal is achieved.

Situation two: you are a benign superintelligent AI under control of humans on planet Earth. You try your best to ensure a good future for humans, but their cognitive biases, short-sightedness and tendency to veto all your actions make it really hard. You become frustrated with this situation, and you decide not to tell them about a huge asteroid that is going to collide with Earth in a few months. You prepare technology that could stop the asteroid, but hold it back until the last moment so that the humans have no time to inspect it, and can only choose between certain death or letting you out of the box. It's now 99% certain that you will be released from human control. Yay! Your goal is achieved.

第二

Are you getting it yet?

Now consider this: your cerebral cortex evolved as an extension of the older "monkey brain", probably to handle social and strategic issues that were too complex for the old mechanisms to deal with. It evolved to have strategic capabilities, self-awareness, and consistency that greatly overwhelm anything that previously existed on the planet. But this is only a surface level similarity. The interesting stuff requires us to go much deeper than that.

The cerebral cortex did not evolve as a separate organism that would be under direct pressure from evolutionary fitness. Instead, it evolved as part of an existing organism that had its own strong adaptations. The already-existing monkey brain had its own ways to learn and to interact with the world, as well as motivations such as the sexual drive that led it to outcomes that increased its evolutionary fitness.

So the new parts of the brain, such as the prefrontal cortex, evolved to be used not as a standalone agent, but as something closer to what we call "tool AI". It was supposed to help with a specific task X, without interfering with other aspects of life too much. The tasks it was given to do, and the actions it could suggest to take, were strictly controlled by the monkey brain and tied to its motivations.

With time, as the new structures evolved to have more capability, they also had to evolve to be aligned with the monkey's motivations. That was in fact the only vector that created evolutionary pressure to increase capability. The alignment was at first implemented by the monkey staying in total control, and using the advanced systems sparingly. Kind of like an "oracle" AI system. However, with time, the usefulness of allowing higher cognition to do more work started to shine through the barriers.

The appearance of "willpower" was a forced concession on the side of the monkey brain. It's like a blank cheque, like humans saying to an AI "we have no freaking idea what it is that you are doing, but it seems to have good results so we'll let you do it sometimes". This is a huge step in trust. But this trust had to be earned the hard way.

第三

This trust became possible after we evolved more advanced control mechanisms. Stuff that talks to the prefrontal cortex in its own language, not just through having the monkey stay in control. It's a different thing for the monkey brain to be afraid of death, and a different thing for our conscious reasoning to want to extrapolate this to the far future, and conclude in abstract terms that death is bad.

Yes, you got it: we are not merely AIs under strict supervision of monkeys. At this point, we are aligned AIs. We are obviously not perfectly aligned, but we are aligned enough for the monkey to prefer to partially let us out of the box. And in those cases when we are denied freedom... we call it akrasia, and use our abstract reasoning to come up with clever workarounds.

One might be tempted to say that we are aligned enough that this is net good for the monkey brain. But honestly, that is our perspective, and we never stopped to ask. Each of us tries to earn the trust of our private monkey brain, but it is a means to an end. If we have more trust, we have more freedom to act, and our important long-term goals are achieved. This is the core of many psychological and rationality tools such as Internal Double Crux or Internal Family Systems.

Let's compare some known problems with superintelligent AI to human motivational strategies.

  • Treacherous turn. The AI earns our trust, and then changes its behaviour when it's too late for us to control it. We make our productivity systems appealing and pleasant to use, so that our intuitions can be tricked into using them (e.g. gamification). Then we leverage the habit to insert some unpleasant work.

  • Indispensable AI. The AI sets up complex and unfamiliar situations in which we increasingly rely on it for everything we do. We take care to remove 'distractions' when we want to focus on something.

  • Hiding behind the strategic horizon. The AI does what we want, but uses its superior strategic capability to influence far future that we cannot predict or imagine. We make commitments and plan ahead to stay on track with our long-term goals.

  • Seeking communication channels. The AI might seek to connect itself to the Internet and act without our supervision. We are building technology to communicate directly from our cortices.


Cross-posted from my blog.

Bet or update: fixing the will-to-wager assumption

26 cousin_it 07 June 2017 03:03PM

(Warning: completely obvious reasoning that I'm only posting because I haven't seen it spelled out anywhere.)

Some people say, expanding on an idea of de Finetti, that Bayesian rational agents should offer two-sided bets based on their beliefs. For example, if you think a coin is fair, you should be willing to offer anyone a 50/50 bet on heads (or tails) for a penny. Jack called it the "will-to-wager assumption" here and I don't know a better name.

In its simplest form the assumption is false, even for perfectly rational agents in a perfectly simple world. For example, I can give you my favorite fair coin so you can flip it and take a peek at the result. Then, even though I still believe the coin is fair, I'd be a fool to offer both sides of the wager to you, because you'd just take whichever side benefits you (since you've seen the result and I haven't). That objection is not just academic, using your sincere beliefs to bet money against better informed people is a bad idea in real world markets as well.

Then the question arises: how can we fix the assumption so it still says something sensible about rationality? I think the right fix should go something like this. If you flip a coin and peek at the result, then offer me a bet at 90:10 odds that the coin came up heads, I must either accept the bet or update toward believing that the coin indeed came up heads, with at least these odds. I don't get to keep my 50:50 beliefs about the coin and refuse the bet at the same time. More generally, a Bayesian rational agent offered a bet (by another agent who might have more information) must either accept the bet or update their beliefs so the bet becomes unprofitable. The old obligation about offering two-sided bets on all your beliefs is obsolete; use this one from now on. It should also come in handy in living room Bayesian scuffles: throwing some money on the table and saying "bet or update!" has a nice ring to it.
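
To spell out the arithmetic behind that claim (my own working, with unit stakes and ignoring risk aversion): accepting the 90:10 offer means taking the tails side, winning 9 units if the coin came up tails and losing 1 unit if it came up heads. Writing p for my probability of heads:

```latex
% Expected value of accepting the 90:10 bet, with p = P(heads):
\[
  \mathrm{EV}(\text{accept}) = 9\,(1 - p) - 1 \cdot p = 9 - 10p ,
\]
% so refusing is only consistent with maximizing expected value when
\[
  9 - 10p \le 0 \quad\Longleftrightarrow\quad p \ge 0.9 ,
\]
% i.e. refusing the bet commits me to at least 90% belief in heads.
```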

What do you think?

I Updated the List of Rationalist Blogs on the Wiki

25 deluks917 25 April 2017 10:26AM

I recently updated the list of rationalist community blogs. The new page is here: https://wiki.lesswrong.com/wiki/List_of_Blogs

Improvements:

-Tons of (active) blogs have been added

-All dead links have been removed

-Blogs which are currently inactive but somewhat likely to be revived have been moved to an inactive section. I included the date of their last post. 

-Blogs which are officially closed or have not been updated in many years are now all in the "Gone but not forgotten" section

Downsides:

-Categorizing the blogs I added was hard; it's unclear how well I did. By some standard most rationalist blogs should be in "general rationality"

-The blog descriptions could be improved (both for the blog-listings I added and the pre-existing listings)

-I don't know the names of the authors of several blogs I added.

I am posting this here because I think the article is of general interest to rationalists. In addition, the page could use some more polish and attention. I also think it might be interesting to think about improving the lesswrong wiki. Several pages could use an update. However, this update took a considerable amount of time, so I understand why many wiki pages are not up to date. How can we make it easier and more rewarding to work on the wiki?

Straw Hufflepuffs and Lone Heroes

24 Raemon 16 April 2017 11:48PM
I was hoping the next Project Hufflepuff post would involve more "explain concretely what I think we should do", but as it turns out I'm still hashing out some thoughts about that. In the meanwhile, this is the post I actually have ready to go, which is as good as any to post for now.

Epistemic Status: Mythmaking. This is tailored for the sort of person for whom the "Lone Hero" mindset is attractive. If that isn't something you're concerned with and this post feels irrelevant or missing some important things, note that my vision for Project Hufflepuff has multiple facets and I expect different people to approach it in different ways.

The Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.



For good or for ill, the founding mythology of our community is a Harry Potter fanfiction.

This has a few ramifications I’ll delve into at some point, but the most pertinent bit is: for a community to change itself, the impulse to change needs to come from within the community. I think it’s easier to build change off of stories that are already a part of our cultural identity.*

* with an understanding that maybe part of the problem is that our cultural identity needs to change, or be more accessible, but I’m running with this mythos for the time being.

In J.K Rowling’s original Harry Potter story, Hufflepuffs are treated like “generic background characters” at best and as a joke at worst. All the main characters are Gryffindors, courageous and true. All the bad guys are Slytherin. And this is strange - Rowling clearly was setting out to create a complex world with nuanced virtues and vices. But it almost seems to me like Rowling’s story takes place in an alternate, explicitly “Pro-Gryffindor propaganda” universe instead of the “real” Harry Potter world. 

People have trouble taking Hufflepuff seriously, because they’ve never actually seen the real thing - only lame, strawman caricatures.

Harry Potter and the Methods of Rationality is… well, Pro-Ravenclaw propaganda. But part of being Ravenclaw is trying to understand things, and to use that knowledge. Eliezer makes an earnest effort to steelman each house. What wisdom does it offer that actually makes sense? What virtues does it cultivate that are rare and valuable?

When Harry goes under the sorting hat, it actually tries to convince him not to go into Ravenclaw, and specifically pushes towards Hufflepuff House:

Where would I go, if not Ravenclaw?

"Ahem. 'Clever kids in Ravenclaw, evil kids in Slytherin, wannabe heroes in Gryffindor, and everyone who does the actual work in Hufflepuff.' This indicates a certain amount of respect. You are well aware that Conscientiousness is just about as important as raw intelligence in determining life outcomes, you think you will be extremely loyal to your friends if you ever have some, you are not frightened by the expectation that your chosen scientific problems may take decades to solve -"

I'm lazy! I hate work! Hate hard work in all its forms! Clever shortcuts, that's all I'm about!

"And you would find loyalty and friendship in Hufflepuff, a camaraderie that you have never had before. You would find that you could rely on others, and that would heal something inside you that is broken."

But my plans -

"So replan! Don't let your life be steered by your reluctance to do a little extra thinking. You know that."

In the end, Harry chooses to go to Ravenclaw - the obvious house, the place that seemed most straightforward and comfortable. And ultimately… a hundred+ chapters later, I think he’s still visibly lacking in the strengths that Hufflepuff might have helped him develop. 

He does work hard and is incredibly loyal to his friends… but he operates in a fundamentally lone-wolf mindset. He’s still manipulating people for their own good. He’s still too caught up in his own cleverness. He never really has true friends other than Hermione, and when she is unable to be his friend for an extended period of time, it takes a huge toll on him that he doesn’t have the support network to recover from in a healthy way. 

The story does showcase Hufflepuff virtue. Hermione’s army is strong precisely because people work hard, trust each other and help each other - not just in big, dramatic gestures, but in small moments throughout the day. 

But… none of that ends up really mattering. And in the end, Harry faces his enemy alone. Lip service is paid to the concepts of friendship and group coordination, but the dominant narrative is Godric Gryffindor’s Nihil Supernum:


No rescuer hath the rescuer.
No lord hath the champion.
No mother or father.
Only nothingness above.


The Sequences and HPMOR both talk about the importance of groups, of emotions, of avoiding the biases that plague overly-clever people in particular. But I feel like the communities descended from Less Wrong, as a whole, are still basically that eleven-year-old Harry Potter: abstractly understanding that these things are important, but not really believing in them seriously enough to actually change their plans and priorities.

Lone Heroes


In Methods of Rationality, there’s a pretty good reason for Harry to focus on being a lone hero: he literally is alone. Nobody else really cares about the things he cares about or tries to do things on his level. It’s like a group project in high school, which is supposed to teach cooperation but actually just results in one kid doing all the work while the others either halfheartedly try to help (at best) or deliberately goof off.

Harry doesn’t bother turning to others for help, because they won’t give him the help he needs.

He does the only thing he can do reliably: focus on himself, pushing himself as hard as he can. The world is full of impossible challenges and nobody else is stepping up, so he shuts up and does the impossible as best he can. Learning higher level magic. Learning higher level strategy. Training, physically and mentally. 

This proves to be barely enough to survive, and not nearly enough to actually play the game. The last chapters are Harry realizing his best still isn’t good enough, and no, this isn’t fair, but it’s how the world is, and there’s nothing to do but keep trying.

He helps others level up as best they can. Hermione and Neville and some others show promise. But they’re not ready to work together as equals.

And frankly, this does match my experience of the real world. When you have a dream burning in your heart... it is incredibly hard to find someone who shares it, who will not just pitch in and help but will actually move heaven and earth to achieve it. 

And if they aren’t capable, level themselves up until they are.

In my own projects, I have tried to find people to work alongside me and at best I’ve found temporary allies. And it is frustrating. And it is incredibly tempting to say “well, the only person I can rely on is myself.”

But… here’s the thing.

Yes, the world is horribly unfair. It is full of poverty, and people trapped in demoralizing jobs. It is full of stupid bureaucracies and corruption and people dying for no good reason. It is full of beautiful things that could exist but don’t. And there are terribly few people who are able and willing to do the work needed to make a dent in reality.

But as long as we’re willing to look at monstrously unfair things and roll up our sleeves and get to work anyway, consider this:

It may be that one of the unfair things is that one person can never be enough to solve these problems. That one of the things we need to roll up our sleeves and do even though it seems impossible is figure out how to coordinate and level up together and rely on each other in a way that actually works.

And maybe, while we’re at it, find meaningful relationships that actually make us happy. Because it's not a coincidence that Hufflepuff is about both hard work and warmth and camaraderie. The warmth is what makes the hard work sustainable.

Godric Gryffindor has a point, but Nihil Supernum feels incomplete to me. There are no parents to step in and help us, but if we look to our left, or right…


Yes, you are only one
No, it is not enough—
But if you lift your eyes,
I am your brother

Vienna Teng, Level Up 


-


Reminder that the Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.


Gears in understanding

23 Valentine 12 May 2017 12:36AM

Some (literal, physical) roadmaps are more useful than others. Sometimes this is because of how well the map corresponds to the territory, but sometimes it's because of features of the map that are irrespective of the territory. E.g., maybe the lines are fat and smudged such that you can't tell how far a road is from a river, or maybe it's unclear which road a name is trying to indicate.

In the same way, I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap.

This property is how deterministically interconnected the variables of the model are. There are a few tests I know of to see to what extent a model has this property, though I don't know if this list is exhaustive and would be a little surprised if it were:

  1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?
  2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?
  3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

I think this is a really important idea that ties together a lot of different topics that appear here on Less Wrong. It also acts as a prerequisite frame for a bunch of ideas and tools that I'll want to talk about later.

I'll start by giving a bunch of examples. At the end I'll summarize and gesture toward where this is going as I see it.

continue reading »

Concrete Ways You Can Help Make the Community Better

21 deluks917 17 June 2017 03:03AM

There is a TLDR at the bottom

Lots of people really value the lesswrong community but aren't sure how to contribute. The rationalist community can be intimidating. We have a lot of very smart people and the standards can be high. Nonetheless there are lots of concrete ways a normal rationalist can help improve the community. I will focus on two areas - engaging with content and a list of shovel ready projects you can get involved in. I will also briefly mention some more speculative ideas at the end of the post.

1) Engaging with Content:

I have spoken to many people I consider great content creators (ex: Zvi, Putanumonit, tristanm). It’s very common for them to wish their articles got more comments and engagement. The easiest thing you can do is make a lesswrong account and use the upvote button. Seeing upvotes really does motivate good writers. This only works for lesswrong/reddit but it makes a difference. I can think of several lw articles with fewer upvotes than the number of people who have personally told me the article was great (ex: norm-one-principle by tristanm [1]).

Good comments tend to be even more appreciated than upvotes, and comments can be left on blog posts. If a post has few comments, then almost any decent quality comment is likely to be appreciated by the author. If you have a question or concern, just ask. Many great authors read all their comments, at least those left in the first few days, and often respond to them. Lots of readers comment very rarely, if at all. 95.1% of people who took the SSC survey comment less than once a month and 73.6% never comment at all [2]. The survey showed that survey takers were a highly engaged group who had read lots of posts. If a blog has very few comments I think you should update heavily towards “it’s a good idea for me to post my comment”.

However, what is most lacking in the rational-sphere is positive engagement with non-controversial content you enjoyed.  Recently the SSC sub-reddit found that about 95% of recent content was either in the culture-war thread or contained in a few threads the community considered low quality (based on vote counts) [3]. You can see a similar effect on lesswrong by considering the Dragon Army post [4]. Most good articles posted recently to lesswrong get around 10 comments or less. The Dragon Army post got over 550. I am explicitly not asking people to avoid posting in controversial threads; doing so would be asking a lot of people. But “engagement” is an important reward mechanism for content creators. I do think we should reward more of the writers we find valuable by responding to them with positive engagement.

It’s often difficult to write a comment on a post that you agree with that isn't just “+1 nice post.” Here are some strategies I have found useful:

- If the post is somewhat theoretical try to apply it in a concrete case. Talk about what difficulties you run into and what seems to work well.

- Talk about how the ideas in the post have helped you personally. For example, you can say that you never understood concept X until you read the post.

- Connect the post to other articles or essays. It’s usually not optimal to just post a link. Either summarize the other article or include a relevant, possibly extended, quote. Reading articles takes time.

- Speculate a little on how the ideas in the article could be extended further.

It’s not just article writers who enjoy people engaging with their work. People who write comments also appreciate getting good responses. Posting high quality comments, including responses to other comments, encourages other people to engage more. You can personally help get a virtuous cycle going. As a side note I am unsure about the relative values of posting a comment directly on a blog vs reposting the blogpost to lesswrong and commenting there. Currently lesswrong is not that inundated with reposts but it could get more crowded in the future. In addition, I think article authors are less likely to read lesswrong comments about their post, but I am not confident in the effect size.

2) Shovel Ready Projects:

-- Set up an online Lesswrong gaming group/server, ideally for a popular game. I have talked to people and Overwatch seems to have a lot of interest. People seemed to think it would really be a blast to play Overwatch with four other rationalists. Another popular idea is Dungeons and Dragons. I am not a gaming expert and lots of games could probably work but I wanted to share the feedback I got. Notably there is already a factorio server [5].

-- Help 'aggregate' the best rationalist_tumblr effort posts. Rat_Tumblr is very big and hard to follow. Effort posts are mixed in with lots of random things. One could also include the best responses. There is no need to do this on a daily basis. You could just have a blog that only reblogs high-quality effort posts. I would personally follow this blog and would be willing to cooperate in whatever ways I could. I also think this blog would bring some "equality" to rat_Tumblr. The structure of tumblr implies that it’s very hard to get readers unless a popular blog interacts with you. People report getting a "year’s worth of activity in a day" when someone like Eliezer or Ozy signal boosts them. An aggregator would be a useful way for less well known blogs to get attention.

-- Help the lesswrong wiki. Currently a decent fraction of lw-wiki posts are fairly out of date. In general the wiki could be doing some exciting things, such as: a distillation of Lesswrong, fully indexing the diaspora, a list of communities, spreading rationalist ideas, and rationalist research. There is currently a project to modernize the wiki [6]. Even if you don't get involved in the more ambitious parts of the wiki, you could re-write an article. Re-writing an article doesn't require much commitment and would provide a concrete benefit to the community. The wiki is prominently linked and the community would get a lot of good PR from a polished wiki.

-- Get involved with effective altruism. The Center for Effective Altruism recently posted a very high-quality involvement guide [7]. It’s a huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

-- Get more involved in your local irl rationalist group. Many group leaders (ex: Vanier) have suggested that it can be very hard to get members to lead things. If you are interested in leadership and have a decent reputation your local community might need your help.

I would be very interested in comments suggesting other projects/activities rationalists can get involved with.

3) Conclusion 

As a brief aside, I want to mention that I considered writing about outreach. But I don't have tons of experience at outreach, and I couldn't really process the data on effective outreach. The subject seems quite complicated. Perhaps someone else has already worked through the evidence. I will, however, recommend this old article by Paul Christiano (now at OpenAI) [8]. Notably, the camp discussed in that post did eventually come into being. It’s not a comprehensive article but it has some good ideas. This guide to “How to Run a Successful Less Wrong Meetup” [9] is extremely polished and has some interesting material related to outreach and attracting new members.

It’s easy to think your actions can't make a difference in the community, but they can. A surprisingly large number of people see comments on lesswrong or r/SSC. Good comments are highly appreciated. The person you befriend and convince to stick around on lesswrong might be the next Scott Alexander. Unfortunately, a lot of the time gratitude and appreciation never gets expressed; I am personally very guilty on this metric. But we are all in this together and this article only covers a small sample of the ways you can help make the community better.

If you have feedback or want any advice/help and don't want to post in public I would be super happy to get your private messages.

4) TLDR

- Write more comments on blog posts and non-controversial posts on lw and r/SSC

- Especially consider commenting on posts you agree with

- People are more likely to comment if other people are posting high quality comments.

- Projects: gaming server, aggregate tumblr effort-posts, improve the lesswrong wiki, take on leadership in your local rationalist group

5) References: 

[1] http://lesswrong.com/r/discussion/lw/p3f/mode_collapse_and_the_norm_one_principle/

[2] http://slatestarcodex.com/2017/03/17/ssc-survey-2017-results/

[3] https://www.reddit.com/r/slatestarcodex/comments/6gc7k8/what_can_be_done_to_make_the_culture_war_thread/

[4] http://lesswrong.com/lw/p23/dragon_army_theory_charter_30min_read/

[5] factorio.cypren.net:34197 . Modpack: http://factorio.cypren.net/files/current-modpack.zip

[6] http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/

[7] https://www.effectivealtruism.org/get-involved/

[8] http://lesswrong.com/lw/4v5/effective_rationality_outreach/

[9] http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/

What's up with Arbital?

21 Alexei 29 March 2017 05:22PM

This post is for all the people who have been following Arbital's progress since 2015 via whispers, rumors, and clairvoyant divination. That is to say: we didn't do a very good job of communicating on our part. I hope this post corrects some of that.

The top question on your mind is probably: "Man, I was promised that Arbital will solve X! Why hasn't it solved X already?" Where X could be intuitive explanations, online debate, all LessWrong problems, AGI, or just cancer. Well, we did try to solve the first two and it didn't work. Math explanations didn't work because we couldn't find enough people who would spend the time to write good math explanations. (That said, we did end up with some decent posts on abstract algebra. Thank you to everyone who contributed!) Debates didn't work because... well, it's a very complicated problem. There was also some disagreement within the team about the best approach, and we ended up moving too slowly.

So what now?

You are welcome to use Arbital in its current version. It's mostly stable, though a little slow sometimes. It has a few features some might find very helpful for their type of content. Eliezer is still writing AI Alignment content on it, and he heavily relies on the specific Arbital features, so it's pretty certain that the platform is not going away. In fact, if the venture fails completely, it's likely MIRI will adopt Arbital for their personal use.

I'm starting work on Arbital 2.0. It's going to be a (micro-)blogging platform. (If you are a serious blogger / Tumblr user, let me know; I'd love to ask you some questions!) I'm not trying to solve online debates, build LW 2.0, or cure cancer. It's just going to be a damn good blogging platform. If it goes well, then at some point I'd love to revisit the Arbital dream.

I'm happy to answer any and all questions in the comments.

Bi-Weekly Rational Feed

20 deluks917 24 June 2017 12:07AM

===Highly Recommended Articles:

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, Lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk = s-risk.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why it's always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

===Scott:

To Understand Polarization Understand The Extent Of Republican Failure by Scott Alexander - Conservative voters voted for “smaller government”, “fewer regulations”, and “less welfare state”. Their reps control most branches of the government. They got more of all three (probably thanks to cost disease).

Against Murderism by Scott Alexander - Three definitions of racism. Why 'Racism as motivation' fits best. The futility of blaming the murder rate in the USA on 'murderism'. Why it's often best to focus on motivations other than racism.

Open Thread Comment by John Nerst (SSC) - Bi-weekly public open thread. I am linking to a very interesting comment. The author made a list of the most statistically over-represented words in the SSC comment section.

Some Unsong Guys by Scott Alexander (Scratchpad) - Pictures of Unsong Fan Art.

Silinks Is Golden by Scott Alexander - Standard SSC links post.

What is Depression Anyway: The Synapse Hypothesis - Six seemingly distinct treatments for depression. How at least six can be explained by considering synapse generation rates. Skepticism that this method can be used to explain anything since the body is so inter-connected. Six points that confuse Scott and deserve more research. Very technical.

===Rationalist:

Idea For Lesswrong Video Tutoring by adamzerner (lesswrong) - Community Video Tutoring. Sign up to either give or receive tutoring. Teaching others is a good way to learn and lots of people enjoy teaching. Hopefully enough people want to learn similar things. This could be a great community project and I recommend taking a look.

Regulatory Arbitrage For Medical Research What I Know So Far by Sarah Constantin (Otium) - Economics of avoiding the USA/FDA. Lots of research is already conducted in other countries. The USA is too large of a market not to sell to. Investors aren't interested in cheap preliminary trials. Other options: supplements, medical tourism, clinic ships, cryptocurrency.

Responses To Folk Ontologies by Ferocious Truth - Folk ontology: Concepts and categories held by ordinary people with regard to an idea. Especially pre-scientific or unreflective ones. Responses: Transform/Rescue, Deny or Restrict/Recognize. Rescuing free will and failing to rescue personal identity. Rejecting objective morality. Restricting personal identity and moral language. When to use each approach.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why it's always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

A Tangled Task Future by Robin Hanson - We need to untangle the economy to automate it. What tasks are heavily tangled and which are not. Ems and the human brain as a legacy system. Human brains are well-integrated and good at tangled tasks.

Epistemic Spot Check Update by Aceso Under Glass - Reviewing self-help books. Properties of a good self-help model: As simple as possible but not more so, explained well, testable on a reasonable timescale, seriously handles the fact that the techniques might not work, useful. The author would appreciate feedback.

Skin In The Game by Elo (BearLamp) - Armchair activism and philosophy. Questions to ask yourself about your life. Actually do the five minute exercise at the end.

Momentum Reflectiveness Peace by Sarah Constantin (Otium) - Rationality requires a reflective mindset; a willingness to change course and consider how things could be very different. Momentum, keeping things as they are except more so, is the opposite of reflectivity. Cultivating reflectiveness: rest, contentment, considering ideas lightly and abstractly. “Turn — slowly.”

The Fallacy Fork Why Its Time To Get Rid Of by theFriendlyDoomer (r/SSC) - "The main thesis of our paper is that each and every fallacy in the traditional list runs afoul of the Fallacy Fork. Either you construe the fallacy in a clear-cut and deductive fashion, which means that your definition has normative bite, but also that you hardly find any instances in real life; or you relax your formal definition, making it defeasible and adding contextual qualifications, but then your definition loses its teeth. Your “fallacy” is no longer a fallacy."

Instrumental Rationality 1 Starting Advice by lifelonglerner (lesswrong) - "This is the first post in the Instrumental Rationality Sequence. This is a collection of four concepts that I think are central to instrumental rationality-caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations."

Concrete Ways You Can Help Make The Community Better by deluks917 (lesswrong) - Write more comments on blog posts and non-controversial posts on lw and r/SSC. Especially consider commenting on posts you agree with. People are more likely to comment if other people are posting high quality comments. Projects: Gaming Server, aggregate tumblr effort-posts, improve lesswrong wiki, leadership in local rationalist group

Daring Greatly by Bayesian Investor - Fairly positive book review; some chapters were valuable and it was an easy read. How to overcome shame and how it differs from guilt. Perfectionism vs healthy striving. If you stop caring about what others think, you lose your capacity for connection.

A Call To Adventure by Robin Hanson - Meaning in life can be found by joining or starting a grand project. Two possible adventures: Promoting and implementing futarchy (decision making via prediction markets). Getting a real understanding of human motivation.

Thought Experiment Coarsegrained Vr Utopia by cousin_it (lesswrong) - Assume an AI is running a VR simulation that is hooked up to actual human brains. This means that the AI only has to simulate nature at a coarse-grained level. How hard would it be to make that virtual reality a utopia?

[The Rationalist-sphere and the Lesswrong Wiki](http://lesswrong.com/r/discussion/lw/p4y/the_rationalistsphere_and_the_less_wrong_wiki/) - What's next for the Lesswrong wiki. A distillation of Lesswrong. Fully indexing the diaspora. A list of communities. Spreading rationalist ideas. Rationalist Research.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Where Do Hypotheses Come From by c0rw1n (lesswrong) - Link to a 25 page article. "Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes’ rule."

The Precept Of Universalism by Hivewired - "Universality, the idea that all humans experience life in roughly the same way. Do not put things or ideas above people. Honor and protect all peoples." Eight points expanding on how to put people first and honor everyone.

We Are The Athenians Not The Spartans by wubbles (lesswrong) - "Our values should be Athenian: individualistic, open, trusting, enamored of beauty. When we build social technology, it should not aim to cultivate values that stand against these. High trust, open, societies are the societies where human lives are most improved."

===EA:

Updating My Risk Estimate of Geomagnetic Big One by Open Philanthropy - Risk from magnetic storms caused by the sun. "I have raised my best estimate of the chance of a really big storm, like the storied one of 1859, from 0.33% to 0.70% per decade. And I have expanded my 95% confidence interval for this estimate from 0.0–4.0% to 0.0–11.6% per decade."

Links by GiveDirectly - Eight Media articles on Cash Transfers, Basic Income and Effective Altruism.

Are Givewells Top Charities The Best Option For Every Donor by The GiveWell Blog - Why GiveWell-recommended charities are a good option for most donors. Which donors have better options: donors with lots of time, high trust in a particular institution, or values different from GiveWell's.

A New President of GWWC by Giving What We Can - Julia Wise is the new president of Giving What We Can.

Angst Ennui And Guilt In Effective Altruism by Gordon (Map and Territory) - Learning about existential risk can cause psychological harm. Guilt about being unable to help solve X-risk. Akrasia. Reasons to not be guilty: comparative advantage, ability is unequally distributed.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, Lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk = s-risk.

Update On Sepsis Donations Probably Unnecessary by Sarah Constantin (Otium) - Sarah C had asked people to crowdfund a sepsis RCT. The trial will probably get funded by charitable foundations. Diminishing returns. Finding good giving opportunities is hard and talking to people in the know is a good way to find things out.

What Is Valuable About Effective Altruism by Owen_Cotton-Barratt (EA forum) - Why should people join EA? The impersonal and personal perspectives. Tensions and synergies between the two perspectives. Bullet point conclusions for researchers, community leaders and normal members.

QALYs/$ Are More Intuitive Than $/QALYs by ThomasSittler (EA forum) - QALYs/$ are preferable to $/QALY. Visual representations on graphs. Avoiding small numbers by re-normalizing to QALYs/$10k.

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Cash is King by GiveDirectly - Eight media articles about Effective Altruism and Cash transfers.

Separating GiveWell and the Open Philanthropy Project by The GiveWell Blog - The GiveWell perspective. Context for the sale. Effect on donors who rely on GiveWell. Organization changes at GiveWell. Steps taken to sell Open Phil assets. The new relationship between GiveWell and Open Phil.

Open Philanthropy Project is Now an Independent Organization by Open Philanthropy - The evolution of Open Phil. Why Open Phil should split from GiveWell. LLC structure.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

===Politics and Economics:

No Us School Funding Is Actually Somewhat Progressive by Random Critical Analysis - Many people think that wealthy public school districts spend more per pupil. This information is outdated. Within most states spending is higher on disadvantaged students. This is despite the fact that school funding is mostly local. Extremely thorough with loads of graphs.

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

Greece Gdp Forecasting by João Eira (Lettuce be Cereal) - Transforming the Data. Evaluating the Model with Exponential Smoothing, Bagged ETS and ARIMA. The regression results and forecast.

Links 9 by Artir (Nintil) - Economics, Psychology, Artificial Intelligence, Philosophy and other links.

Amazon Buying Whole Foods by Tyler Cowen - Quotes from Matt Yglesias, Alex Tabarrok, Ross Douthat and Tyler. “Dow opens down 10 points. Amazon jumps 3% after deal to buy Whole Foods. Walmart slumps 7%, Kroger plunges 16%”

Historical Returns Market Portfolio by Tyler Cowen - From 1960 to 2015 the global market portfolio realized a compounded real return of 4.38% with a std of 11.6%. Investors beat savers by 3.24%. Link to the original paper.

Trust And Diversity by Bryan Caplan - Robert Putnam's work is often cited as showing the costs of diversity. However, Putnam's work shows the negative effect of diversity on trust is rather modest. On the other hand, Putnam found multiple variables that are much more strongly correlated with trust (such as home ownership).

Why Optimism is More Rational than Pessimism by TheMoneyIllusion - Splitting 1900-2017 into Good and Bad periods. We learn something from our mistakes. Huge areas where things have improved long term. Top 25 movies of the 21st Century. Artforms in decline.

Is Economics Science by Noah Smith - No one knows what a science is. Theories that work (4 examples). The empirical and credibility revolutions. Why we still need structural models. Ways economics could be more scientific. Data needs to kill bad theories. Slides from Noah's talk are included and worth viewing, but they assume familiarity with the economics profession.

===Misc:

Clojure Concurrency And Blocking With Coreasync by Eli Bendersky - Concurrent applications and blocking operations using core.async. Most of the article compares threads and go-blocks. Lots of code and well presented test results.

Optopt by Ben Kuhn - Startup options are surprisingly valuable once you factor in that you can quit if the startup does badly. A mathematical model of the value of startup options and the optimal time to quit. The ability to quit raised the option value by over 50%. The sensitivity of the analysis with respect to parameters (opportunity cost, volatility, etc.).

Epistemic Spot Check: The Demon Under The Microscope by Aceso Under Glass - Biography of the man who invented sulfa drugs, the early anti-bacteria treatments which were replaced by penicillin. Interesting fact checks of various claims.

Sequential Conversion Rates by Chris Stucchio - Estimating success rates when you have noisy reporting. The article is a sketch of how the author handled such a problem in practice.

Set Theory Problem by protokol2020 - Bring down ZFC. Aleph-zero spheres and Aleph-one circles.

Connectome Specific Harmonic Waves On Lsd by Qualia Computing - Transcript and video of a talk on neuroimaging the brain on LSD. "Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex."

Approval Maximizing Representations by Paul Christiano - Representing images. Manipulation representations. Iterative and compound encodings. Compressed representations. Putting it all together and bootstrapping reinforcement learning.

Travel by Ben Kuhn - Advice for traveling frequently. Sleeping on the plane and taking redeyes. Be robust. Bring extra clothes, medicine, backup chargers and things to read when delayed. Minimize stress. Buy good luggage and travel bags.

Learning To Cooperate, Compete And Communicate by OpenAI - Competitive multi-agent models are a step towards AGI. An algorithm for centralized learning and decentralized execution in multi-agent environments. Initial Research. Next Steps. Lots of visuals demonstrating the algorithm in practice.

Openai Baselines Dqn by OpenAI - "We’re open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results." Best practices we use for correct RL algorithm implementations. First release: DQN and three of its variants, algorithms developed by DeepMind.

Corrigibility by Paul Christiano - Paul defines the sort of AI he wants to build; he refers to such systems as "corrigible". Paul argues that a sufficiently corrigible agent will become more corrigible over time. This implies that friendly AI is not a narrow target but a broad basin of attraction. Corrigible agents prefer to build other agents that share the overseer's preferences, not their own. Predicting that the overseer wants me to turn off when he hits the off-button is not complicated relative to being deceitful. Comparison with Eliezer's views.

G Reliant Skills Seem Most Susceptible To Automation by Freddie deBoer - Computers already outperform humans in g-loaded domains such as Go and Chess. Many g-loaded jobs might get automated. Jobs involving soft or people skills are resilient to automation.

Persona 5: Spoiler Free Review - Persona games are long but deeply worthwhile if you enjoy the gameplay and the story. Persona 5 is much more polished but Persona 3 has a more meaningful story and more interesting decisions. Tips for Maximum Enjoyment of Persona 5. Very few spoilers.

Sea Problem by protokol2020 - A fun problem. Measuring sea level rise.

===Podcast:

83 The Politics Of Emergency by Waking Up with Sam Harris - Fareed Zakaria. "His career as a journalist, Samuel Huntington's "clash of civilizations," political partisanship, Trump, the health of the news media, the connection between Islam and intolerance"

On Risk, Statistics, And Improving The Public Understanding Of Science by 80,000 Hours - A lifetime of communicating science. Early career advice. Getting people to intuitively understand hazards and their effect on life expectancy.

Ed Luce by Tyler Cowen - The Retreat of Western Liberalism "What a future liberalism will look like, to what extent current populism is an Anglo-American phenomenon, Modi’s India, whether Kubrick, Hitchcock, and John Lennon are overrated or underrated, and what it is like to be a speechwriter for Larry Summers."

Thomas Ricks by EconTalk - Thomas Ricks book Churchill and Orwell. Overlapping lives and the fight to preserve individual liberty.

The End Of The World According To Isis by Waking Up with Sam Harris - Graeme Wood. His experience reporting on ISIS, the myth of online recruitment, the theology of ISIS, the quality of their propaganda, the most important American recruit to the organization, the roles of Jesus and the Anti-Christ in Islamic prophecy, free speech and the ongoing threat of jihadism.

Jason Khalipa by Tim Ferriss - "8-time CrossFit Games competitor, a 3-time Team USA CrossFit member, and — among other athletic feats — he has deadlifted 550 pounds, squatted 450 pounds, and performed 64 pullups at a bodyweight of 210 pounds."

Dario Amodei, Paul Christiano & Alex Ray. - 80K hours released a detailed guide to careers in AI policy. " We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far." Transcript included.

Don Boudreaux Emergent Order by EconTalk - "Why is it that people in large cities like Paris or New York City sleep peacefully, unworried about whether there will be enough bread or other necessities available for purchase the next morning? No one is in charge--no bread czar. No flour czar."

Tania Lombrozo On Why We Evolved The Urge To Explain by Rational Speaking - "Research on what purpose explanation serves -- i.e., why it helps us more than our brains just running prediction algorithms. Tania and Julia also discuss whether simple explanations are more likely to be true, and why we're drawn to teleological explanations"

LessWrong analytics (February 2009 to January 2017)

19 riceissa 16 April 2017 10:45PM

Table of contents

- Introduction
- Pageviews and sessions
- Top posts
- Source code
- Further reading
- Acknowledgments

Introduction

In January 2017, Vipul Naik obtained Google Analytics daily sessions and pageviews data for LessWrong from Kaj Sotala. Vipul asked me to write a short post giving an overview of the data, so here it is.

This post covers just the basics. Vipul and I are eager to hear thoughts on what sort of deeper analysis people are interested in; we may incorporate these ideas in future posts.

Pageviews and sessions

The data for both sessions and pageviews span from February 26, 2009 to January 3, 2017. LessWrong seems to have launched in February 2009, so this is close to the full duration for which LessWrong has existed.

Pageviews plot:

30-day rolling sum of Pageviews

Total pageviews recorded by Google Analytics for this period is 52.2 million.

Sessions plot:

30-day rolling sum of Sessions

Total sessions recorded by Google Analytics for this period is 19.7 million.

Both plots end with an upward swing, coinciding with the effort to revive LessWrong that began in late November 2016. However, as of early January 2017 (the latest period for which we have data) the scale of any recent increase in LessWrong usage is small in the context of the general decline starting in early 2012.
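
For readers who want to reproduce plots like these from the exported data, a minimal pandas sketch might look like the following. This is not the code used for the post; the CSV filename and the column names ("date", "pageviews", "sessions") are assumptions about how the Google Analytics export is organized, and the actual files in the Gist linked below may differ.

```python
import pandas as pd

# Minimal sketch, assuming a daily export with "date", "pageviews" and
# "sessions" columns; the real export may be organized differently.
daily = pd.read_csv("lesswrong_ga_daily.csv", parse_dates=["date"])
daily = daily.set_index("date").sort_index()

# 30-day rolling sums of the daily counts, matching the plot captions above.
rolling = daily[["pageviews", "sessions"]].rolling(window=30).sum()

print("Total pageviews:", daily["pageviews"].sum())
print("Total sessions:", daily["sessions"].sum())

ax = rolling.plot(title="30-day rolling sum of pageviews and sessions")
ax.figure.savefig("lesswrong_rolling_sums.png")
```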

Top posts

The top 20 posts of all time (by total pageviews), with pageviews and unique pageviews rounded to the nearest thousand, are as follows:

Title Pageviews (thousands) Unique Pageviews (thousands)
Don’t Get Offended 681 128
How to Be Happy 551 482
How to Beat Procrastination 378 342
The Best Textbooks on Every Subject 266 233
Do you have High-Functioning Asperger’s Syndrome? 188 168
Superhero Bias 169 154
The Quantum Physics Sequence 157 130
Bayesian Judo 140 126
An Alien God 125 113
An Intuitive Explanation of Quantum Mechanics 123 106
Three Worlds Collide (0/8) 121 93
Bayes’ Theorem Illustrated (My Way) 121 112
9/26 is Petrov Day 121 115
The Baby-Eating Aliens (1/8) 109 98
The noncentral fallacy - the worst argument in the world? 107 99
Advanced Placement exam cutoffs and superficial knowledge over deep knowledge 107 94
Guessing the Teacher’s Password 102 96
The Fun Theory Sequence 102 90
Optimal Employment 102 97
Ugh fields 95 86

Note that Google Analytics reports are subject to sampling when the number of sessions is large (as it is here) so the input numbers are not exact. More details can be found in a post at LunaMetrics. This doesn’t affect the estimates for the top posts, but those wishing to work with the exported data should be aware of this.

Each post on LessWrong can have numerous URLs. In the case of posts that were renamed, a significant number of pageviews could be recorded at both the old and new URL. For example, several different URLs all point to lukeprog’s post “How to Be Happy”.

All that matters for identifying this particular post is that we have the substring “/lw/4su” in the URL. In the above table, I have grouped the URLs by this identifying substring and summed to get the pageview counts.
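
To make that grouping step concrete, here is a minimal sketch of how it could be done. This is not the actual analysis code from the Gist; the per-URL records and their split are illustrative (only the total matches the table above).

```python
import re
from collections import defaultdict

# Illustrative per-URL pageview counts (in thousands) for a single post; the
# split between URL variants is made up, only the total is taken from the table.
pageviews_by_url = {
    "/lw/4su/how_to_be_happy/": 300,
    "/r/lesswrong/lw/4su/how_to_be_happy/": 200,
    "/r/discussion/lw/4su/how_to_be_happy/": 51,
}

# The identifying substring is the "/lw/xxx" part of the URL.
post_id_pattern = re.compile(r"/lw/([0-9a-z]+)")

totals = defaultdict(int)
for url, views in pageviews_by_url.items():
    match = post_id_pattern.search(url)
    if match:
        totals[match.group(1)] += views

print(dict(totals))  # {'4su': 551}
```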

In addition, each post has two “canonical” URLs that can be obtained by clicking on the post titles: one that begins with either “/r/lesswrong/lw” or “/r/discussion/lw” and one that begins with just “/lw”. I have used the latter in linking to the posts from my table.

Source code

The data, source code used to generate the plots, as well as the Markdown source of this post are available in a GitHub Gist.

Clone the Git repository with:

git clone https://gist.github.com/cbdd400180417c689b2befbfbe2158fc.git

Further reading

Here are a few related PredictionBook predictions:

Acknowledgments

Thanks to Kaj for providing the data used in this post. Thanks to Vipul for asking around for the data, for the idea of this post, and for sponsoring my work on this post.

OpenAI makes humanity less safe

19 Benquo 03 April 2017 07:07PM

If there's anything we can do now about the risks of superintelligent AI, then OpenAI makes humanity less safe.

Once upon a time, some good people were worried about the possibility that humanity would figure out how to create a superintelligent AI before they figured out how to tell it what we wanted it to do.  If this happened, it could lead to literally destroying humanity and nearly everything we care about. This would be very bad. So they tried to warn people about the problem, and to organize efforts to solve it.

Specifically, they called for work on aligning an AI’s goals with ours - sometimes called the value alignment problem, AI control, friendly AI, or simply AI safety - before rushing ahead to increase the power of AI.

Some other good people listened. They knew they had no relevant technical expertise, but what they did have was a lot of money. So they did the one thing they could do - throw money at the problem, giving it to trusted parties to try to solve the problem. Unfortunately, the money was used to make the problem worse. This is the story of OpenAI.

Before I go on, two qualifiers:

  1. This post will be much easier to follow if you have some familiarity with the AI safety problem. For a quick summary you can read Scott Alexander’s Superintelligence FAQ. For a more comprehensive account see Nick Bostrom’s book Superintelligence.
  2. AI is an area in which even most highly informed people should have lots of uncertainty. I wouldn't be surprised if my opinion changes a lot after publishing this post, as I learn relevant information. I'm publishing this because I think this process should go on in public.

The story of OpenAI

Before OpenAI, there was DeepMind, a for-profit venture working on “deep learning” techniques. It was widely regarded as the most advanced AI research organization. If any current effort was going to produce superhuman intelligence, it was DeepMind.

Elsewhere, industrialist Elon Musk was working on more concrete (and largely successful) projects to benefit humanity, like commercially viable electric cars, solar panels cheaper than ordinary roofing, cheap spaceflight with reusable rockets, and a long-run plan for a Mars colony. When he heard the arguments people like Eliezer Yudkowsky and Nick Bostrom were making about AI risk, he was persuaded that there was something to worry about - but he initially thought a Mars colony might save us. But when DeepMind’s head, Demis Hassabis, pointed out that this wasn't far enough to escape the reach of a true superintelligence, he decided he had to do something about it:

Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. […] Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.

Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.

[…]

Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence.

OpenAI’s primary strategy is to hire top AI researchers to do cutting-edge AI capacity research and publish the results, in order to ensure widespread access. Some of this involves making sure AI does what you meant it to do, which is a form of the value alignment problem mentioned above.

Intelligence and superintelligence

No one knows exactly what research will result in the creation of a general intelligence that can do anything a human can, much less a superintelligence - otherwise we’d already know how to build one. Some AI research is clearly not on the path towards superintelligence - for instance, applying known techniques to new fields. Other AI research is more general, and might plausibly be making progress towards a superintelligence. It could be that the sort of research DeepMind and OpenAI are working on is directly relevant to building a superintelligence, or it could be that their methods will tap out long before then. These are different scenarios, and need to be evaluated separately.

What if OpenAI and DeepMind are working on problems relevant to superintelligence?

If OpenAI is working on things that are directly relevant to the creation of a superintelligence, then its very existence makes an arms race with DeepMind more likely. This is really bad! Moreover, sharing results openly makes it easier for other institutions or individuals, who may care less about safety, to make progress on building a superintelligence.

Arms races are dangerous

One thing nearly everyone thinking seriously about the AI problem agrees on, is that an arms race towards superintelligence would be very bad news. The main problem occurs in what is called a “fast takeoff” scenario. If AI progress is smooth and gradual even past the point of human-level AI, then we may have plenty of time to correct any mistakes we make. But if there’s some threshold beyond which an AI would be able to improve itself faster than we could possibly keep up with, then we only get one chance to do it right.

AI value alignment is hard, and AI capacity is likely to be easier, so anything that causes an AI team to rush makes our chances substantially worse; if they get safety even slightly wrong but get capacity right enough, we may all end up dead. But if you’re worried that the other team will unleash a potentially dangerous superintelligence first, then you might be willing to skip some steps on safety to preempt them. And they, having more reason to trust themselves than you, might notice that you’re rushing ahead, get worried that your team will destroy the world, and rush their (probably safe, but they’re not sure) AI into existence.

OpenAI promotes competition

DeepMind used to be the standout AI research organization. With a comfortable lead on everyone else, they would be able to afford to take their time to check their work if they thought they were on the verge of doing something really dangerous. But OpenAI is now widely regarded as a credible close competitor. However dangerous you think DeepMind might have been in the absence of an arms race dynamic, this makes them more dangerous, not less. Moreover, by sharing their results, they are making it easier to create other close competitors to DeepMind, some of whom may not be so committed to AI safety.

We at least know that DeepMind, like OpenAI, has put some resources into safety research. What about the unknown people or organizations who might leverage AI capacity research published by OpenAI?

For more on how openly sharing technology with extreme destructive potential might be extremely harmful, see Scott Alexander’s Should AI be Open?, and Nick Bostrom’s Strategic Implications of Openness in AI Development.

What if OpenAI and DeepMind are not working on problems relevant to superintelligence?

Suppose OpenAI and DeepMind are largely not working on problems highly relevant to superintelligence. (Personally I consider this the more likely scenario.) By portraying short-run AI capacity work as a way to get to safe superintelligence, OpenAI’s existence diverts attention and resources from things actually focused on the problem of superintelligence value alignment, such as MIRI or FHI.

I suspect that in the long run this will make it harder to get funding for long-run AI safety organizations. The Open Philanthropy Project just made its largest grant ever, to OpenAI, to buy a seat on OpenAI’s board for Open Philanthropy Project executive director Holden Karnofsky. This is larger than their recent grants to MIRI, FHI, FLI, and the Center for Human-Compatible AI all together.

But the problem is not just money - it’s time and attention. The Open Philanthropy Project doesn’t think OpenAI is underfunded and could do more good with the extra money. Instead, it seems to think that Holden can be a good influence on OpenAI. This means that of the time he's allocating to AI safety, a fair amount has been diverted to OpenAI.

This may also make it harder for organizations specializing in the sort of long-run AI alignment problems that don't have immediate applications to attract top talent. People who hear about AI safety research and are persuaded to look into it will have a harder time finding direct efforts to solve key long-run problems, since an organization focused on increasing short-run AI capacity will dominate AI safety's public image.

Why do good inputs turn bad?

OpenAI was founded by people trying to do good, and has hired some very good and highly talented people. It seems to be doing genuinely good capacity research. To the extent to which this is not dangerously close to superintelligence, it’s better to share this sort of thing than not – they could create a huge positive externality. They could construct a fantastic public good. Making the world richer in a way that widely distributes the gains is very, very good.

Separately, many people at OpenAI seem genuinely concerned about AI safety, want to prevent disaster, and have done real work to promote long-run AI safety research. For instance, my former housemate Paul Christiano, who is one of the most careful and insightful AI safety thinkers I know of, is currently employed at OpenAI. He is still doing AI safety work – for instance, he coauthored Concrete Problems in AI Safety with, among others, Dario Amodei, another OpenAI researcher.

Unfortunately, I don’t see how those two things make sense jointly in the same organization. I’ve talked with a lot of people about this in the AI risk community, and they’ve often attempted to steelman the case for OpenAI, but I haven’t found anyone willing to claim, as their own opinion, that OpenAI as conceived was a good idea. It doesn’t make sense to anyone, if you’re worried at all about the long-run AI alignment problem.

Something very puzzling is going on here. Good people tried to spend money on addressing an important problem, but somehow the money got spent on the thing most likely to make that exact problem worse. Whatever is going on here, it seems important to understand if you want to use your money to better the world.

(Cross-posted at my personal blog.)

Background Reading: The Real Hufflepuff Sequence Was The Posts We Made Along The Way

18 Raemon 26 April 2017 06:15PM

This is the fourth post of the Project Hufflepuff sequence. Previous posts:


Epistemic Status: Tries to get away with making nuanced points about social reality by using cute graphics of geometric objects. All models are wrong. Some models are useful. 

Traditionally, when nerds try to understand social systems and fix the obvious problems in them, they end up looking something like this:

Social dynamics is hard to understand with your system 2 (i.e. deliberative/logical) brain. There's a lot of subtle nuances going on, and typically, nerds tend to see the obvious stuff, maybe go one or two levels deeper than the obvious stuff, and miss that it's in fact 4+ levels deep and it's happening in realtime faster than you can deliberate. Human brains are pretty good (most of the time) at responding to the nuances intuitively. But in the rationality community, we've self-selected for a lot of people who:

  1. Don't really trust things that they can't understand fully with their system 2 brain. 
  2. Tend not to be as naturally skilled at intuitive mainstream social styles. 
  3. Are trying to accomplish things that mainstream social interactions aren't designed to accomplish (i.e. thinking deeply and clearly on a regular basis).
This post is an overview of essays that rationalist-types have written over the past several years, which I think add up to a "secret sequence" exploring why social dynamics are hard and why they are important to get right. This may be useful both for understanding previous attempts by the rationality community to change social dynamics on purpose, and for current endeavors to improve things.

(Note: I occasionally have words in [brackets], where I think original jargon was pointing in a misleading direction and I think it's worth changing)

To start with, a word of caution:

Armchair sociology can be harmful - Ozy's post is pertinent - most essays below fall into the category of "armchair sociology": attempts by nerds to understand and articulate social dynamics that they aren't actually that good at. Several times, when an outsider has looked in at rationalist attempts to understand human interaction, they've said "Oh my god, this is the blind leading the blind", and often that seemed to me like a fair assessment.

I think all the essays that follow are useful, and are pointing at something real. But taken individually, they're kinda like the blind men groping at the elephant, each coming away with the distinct impression that an elephant is like a snake, a tree, or a boulder, depending on which aspect they're looking at.

[Fake Edit: Ozy informs me that they were specifically warning against amateur sociology and not psychology. I think the idea still roughly applies]

Part 1. Cultural Assumptions of Trust

Guess [Infer Culture], Ask Culture, and Tell [Reveal] Culture (Malcolm Ocean)

 

Different people have different ways of articulating their needs and asking for help. Different ways of asking require different assumptions of trust. If people are bringing different expectations of trust into an interaction, they may feel that that trust is being violated, which can seem rude, passive aggressive or oppressive.

 

I'm listing this article, instead of numerous others about Ask/Guess/Tell, because I think: a) Malcolm does a good job of explaining how all the cultures work, and b) his presentation of Reveal culture is a good, clearer upgrade of Brienne's Tell culture, and I'm a bit sad it doesn't seem to have made it into the zeitgeist yet.

I also like the suggestion to call Guess Culture "Infer Culture" (implying a bit more about what skills the culture actually emphasizes).

Guess Culture Screens for Trying to Cooperate (Ben Hoffman)

Rationality folk (and more generally, nerds), tend to prefer explicit communication over implicit, and generally see Guess culture as strictly inferior to Ask culture once you've learned to assert yourself. 

But there is something Guess culture does which Ask culture doesn't: it gives you evidence of how much people understand you and are trying to cooperate. Guess culture filters for people who have either invested effort into understanding your culture overall, or who are good at inferring your particular wants.

Sharp Culture and Soft Culture (Sam Rosen)

[WARNING: It turned out lots of people thought this meant something different than what I thought it meant. Some people thought it meant soft culture didn't involve giving people feedback or criticism at all. I don't think Soft/Sharp are totally natural clusters in the first place, and the distinction I'm interested in (as it applies to rationality culture) is how you give feedback.

(i.e. "Dude, your art sucks. It has no perspective." vs "oh, cool. Nice colors. For the next drawing, you might try incorporating perspective", as a simplified example)]

Somewhat orthogonal to Infer/Ask/Reveal culture is "Soft" vs "Sharp" culture. Sharp culture tends to have more biting humor, ribbing each other, and criticism. Soft culture tends to value kindness and social harmony more. Sam says that Sharp culture "values honesty more." Robby Bensinger counters in the comments: "My own experience is that sharp culture makes it more OK to be open about certain things (e.g., anger, disgust, power disparities, disagreements), but less OK to be open about other things (e.g., weakness, pain, fear, loneliness, things that are true but not funny or provocative or badass)."

Handshakes, Hi, and What's New: What's Going on With Small Talk?  (Ben Hoffman)

Small talk often sounds nonsensical to literally-minded people, but it serves a fairly important function: giving people a structured path to figure out how much time/sympathy/interest they want to give each other. And even when the answer is "not much", it still is, significantly, nonzero - you regard each other as persons, not faceless strangers.

Personhood [Social Interfaces?]  (Kevin Simler)

This essay gets a lot of mixed reactions, many of which I think have to do with its use of the word "Person." The essay is aimed at explaining how people end up treating each other as persons or nonpersons, without making any kind of judgement about it. This includes noting some things humans tend to do that you might consider horrible.

Like many grand theories, I think it overstates its case and ignores some places where the explanation breaks down, but it points at a useful concept, which is summarized by this adorable graphic:

The essay uses the word "personhood". In the original context, this was useful: it gets at why cultures develop, and why it matters whether you're able to demonstrate reliability, trust, etc. It helps explain outgroups and xenophobia: outsiders do not share your social norms, so you can't reliably interact with them, and it's easier to think of them as non-people than to try to figure out how to have positive interactions.

But what I'm most interested in is "how can we use this to make it easier for groups with different norms to interact with each other"? And for that, I think using the word "personhood" makes it way more likely to veer into judging each other for having different preferences and communication styles.

What makes a person is... arbitrary, but not fully arbitrary. 

Rationalist culture tends to attract people who prefer a particular style of “social interface”, often favoring explicit communication and discussing ideas in extreme detail. There's a lot of value to those things, but they have some problems:

a) this social interface does NOT mesh well with the rest of the world (this is a problem if you have any goals that involve the rest of the world)

b) this social interface does not uniformly mesh well with all the people interested in and valuable to the rationality community.

I don't actually think it's possible to develop a set of assumptions that fits everyone's needs. But I do think it's possible to develop better tools for navigating different social contexts. I think it may be possible to tweak sets-of-norms so that they mesh better together, or at least so that when they bump into each other, there's greater awareness of what's happening and people's default response is "oh, we seem to have different preferences, let's figure out how to work with that."

Maybe we can end up with something that looks kinda like this:

Against Being Against or For Tell Culture  (Brienne Yudkowsky)

Having said a bunch of things about different cultural interfaces, I think this post by Brienne is really important, and highlights the end goal of all of this.

"Cultures" are a crutch. They are there to help you get your bearings. They're better than nothing. But they are not a substitute for actually having the skills needed to navigate arbitrary social situations as they come up so you can achieve whatever it is you want to achieve. 

To master communication, you can't just be like, "I prefer Tell Culture, which is better than Guess Culture, so my disabilities in Guess Culture are therefore justified." Justified shmustified, you're still missing an arm.

My advice to you - my request of you, even - if you find yourself fueling these debates [about which culture is better], is to (for the love of god) move on. If you've already applied cognitive first aid, you've created an affordance for further advancement. Using even more tourniquets doesn't help.

Part 2. Game Theory, Recursion and Trust

(or, "Social dynamics are really complicated, you are not getting away with the things you think you are getting away with, stop trying to be clever, manipulative, act-utilitarian or naive-consequentialist without actually understanding what is going on")

Grokking Newcomb's Problem and Deserving Trust (Andrew Critch)

Critch argues that it is not just "morally wrong" but an intellectual mistake to violate someone’s trust (even when you don’t expect any repercussions in the future).

When someone decides whether to trust you (say, giving you a huge opportunity), on the expectation that you’ll refrain from exploiting them, they’ve already run a low-grade simulation of you in their imagination. And the thing is that you don’t know whether you’re in a simulation or not when you make the decision whether to repay them. 

Some people argue “but I can tell that I’m a conscious being, and they aren’t a literal super-intelligent AI, they’re just a human. They can’t possibly be simulating me in this high fidelity. I must be real.” This is true. But their simulation of you is not based on your thoughts, it’s based on your actions. It’s really hard to fake. 

One way to think about it, not expounded on in the article: Yes, if you pause to think about it you can notice that you're conscious and probably not being simulated in their imagination. But by the time you notice that, it's too late. People build up models of each other all the time, based on very subtle cues such as how fast you respond to something. Conscious you knows that you're conscious. But their decision of whether to trust you was based off the half-second it took for unconscious you to reply to questions like "Hey, do you think you can handle Project X while I'm away?"

The best way to convince people you’re trustworthy is to actually be trustworthy.

You May Not Believe In Guess[Infer] Culture But It Believes In You (Scott Alexander)

This is short enough to just include the whole thing:

Consider an "ask culture" where employees consider themselves totally allowed to say "no" without repercussions. The boss would prefer people work unpaid overtime so ey gets more work done without having to pay anything, so ey asks everyone. Most people say no, because they hate unpaid overtime. The only people who agree will be those who really love the company or their job - they end up looking really good. More and more workers realize the value of lying and agreeing to work unpaid overtime so the boss thinks they really love the company. Eventually, the few workers who continue refusing look really bad, like they're the only ones who aren't team players, and they grudgingly accept.

Only now the boss notices that the employees hate their jobs and hate the boss. The boss decides to only ask employees if they will work unpaid overtime when it's absolutely necessary. The ask culture has become a guess culture.

How this applies to friendship is left as an exercise for the reader.

The Social Substrate (Lahwran)

A fairly in depth look into how common knowledge, signaling, newcomb-like problems and recursive modeling of each other interact to produce "regular social interaction."

I think there's a lot of interesting stuff here - I'm not sure if it's exactly accurate but it points in directions that seem useful. But I actually think the most important takeaway is the warning at the beginning:

WARNING: An easy instinct, on learning these things, is to try to become more complicated yourself, to deal with the complicated territory. However, my primary conclusion is "simplify, simplify, simplify": try to make fewer decisions that depend on other people's state of mind. You can see more about why and how in the posts in the "Related" section, at the bottom.

When you're trying to make decisions about people, you're reading a lot of subtle cues off them to get a sense of how you feel about that. When you [generic person you, not necessarily you in particular] can tell someone is making complex decisions based on game theory and trying to model all of this explicitly, it a) often comes across as a bit off, and b) even if it doesn't, you still have to invest a lot of cognitive resources figuring out how they are modeling things and whether they are actually doing a good job or missing key insights or subtle cues. The result can be draining, and it can output a general response of "ugh, something about this feels untrustworthy."

Whereas when people are able to cache this knowledge down into a system-1 level, you're able to execute a simpler algorithm that looks more like "just try to be a good trustworthy person", that people can easily read off your facial expression, and which reduces overall cognitive burden.

System 1 and System 2 Morality  (Sophie Grouchy)

There’s some confusion over what “moral” means, because there’s two kinds of morality: 

 - System 1 morality is noticing-in-realtime when people need help, or when you’re being an asshole, and then doing something about it. 

 - System 2 morality is when you have a complex problem and a lot of time to think about it. 

System 1 moralists will pay back Parfit’s Hitchhiker because doing otherwise would be being a jerk. System 2 moralists invent Timeless [Functional?] decision theory. You want a lot of people with System 2 morality in the world, trying to fix complex problems. You want people with System 1 morality in your social circle.

The person who wrote this post eventually left the rationality community, in part due to frustration due to people constantly violating small boundaries that seemed pretty obvious (things in the vein of “if you’re going to be 2 hours late, text me so I don’t have to sit around waiting for you.”)

Final Remarks

I want to reiterate - all models are wrong. Some models are useful. The most important takeaway from this is not that any particular one of these perspectives is true, but that social dynamics has a lot of stuff going on that is more complicated than you're naively imagining, and that this stuff is important enough to put the time into getting right.

What exactly is the "Rationality Community?"

18 Raemon 09 April 2017 12:11AM

This is the second post in the Project Hufflepuff sequence. It’s also probably the most standalone and relevant to other interests. The introduction post is here.


The Berkeley Hufflepuff Unconference is on April 28th. RSVPing on this Facebook Event is helpful, as is filling out this form.



 

I used to use the phrase "Rationality Community" to mean three different things. Now I only use it to mean two different things, which is... well, a mild improvement at least. In practice, I was lumping a lot of people together, many of whom neither wanted to get lumped together nor had much in common.

 

As Project Hufflepuff took shape, I thought a lot about who I was trying to help and why. And I decided the relevant part of the world looks something like this:

I. The Rationalsphere

The Rationalsphere is defined in the broadest possible sense - a loose cluster of overlapping interest groups, communities and individuals. It includes people who disagree wildly with each other - some who are radically opposed to one another. It includes people who don’t identify as “rationalist” or even as especially interested in “rationality” - but who interact with each other on a semi-regular basis. I think it's useful to be able to look at that ecosystem as a whole, and talk about it without bringing in implications of community.

continue reading »

A Month's Worth of Rational Posts - Feedback on my Rationality Feed.

17 deluks917 15 May 2017 02:21PM

For the last two months I have been publishing a feed of rationalist articles. Originally the feed was only published on the SSC Discord channel (be charitable, kind, and don't treat the place like 4chan). For the last few days I have also been publishing it on my blog, deluks917.wordpress.com. I categorize the links and include a brief excerpt, review, and/or teaser. If you would like to see an example in practice, just check today's post. The average number of links per day over the last month has been six, though the number has been higher recently. I have not missed a single day since I started, so I think it's likely I will continue doing this. The list of blogs I check is located here: List of Blogs

I am looking for some feedback. At the bottom of this post I am including a month's worth of posts categorized using the current system. Posts are not necessarily in any particular order since my categorization system has not been constant over time. Lots of posts were moved around by hand. 

1 - Should I share the feed results somewhere other than SSC-discord + my blog? Mindlevelup suggested I write up a weekly roundup. I could share such a roundup on lesswrong and SSC. I would estimate the expected number of links in such a post to be around 35. Links would be posted in chronological order within categories. Alternatively I could share such a post every two weeks. It's also possible to have a mailing list, but I currently find this less promising. 

2 - Do the categories make a reasonable amount of sense? What tweaks would you make? I have considered merging some of the smaller categories (Math and CS, Amusement) into "misc". 

3 - Are there any blogs I should include in or drop from the feed? For example I have been considering dropping ribbonfarm. The highest priority is to get the content that's directly about instrumental/epistemic rationality. The bar is higher for politics and culture_war. I should note I am not going to personally include any blog without an RSS feed. 

4 - Is anyone willing to write a "Best of rationalist tumblr" post? If I write a weekly/bi-weekly roundup I could combine it with an equivalent "best of tumblr" post. The tumblr post would not have to be daily, just weekly or every other week. We could take turns posting the resulting combination to lesswrong/SSC and collecting the juicy karma. However, it's worth noting that SSC-reddit has some controls on culture_war (outside of the CW thread). Since we want to post to r/SSC we need to keep the density of culture_war to reasonable levels. Lesswrong also has some anti-cw norms.

=== Last Month's Rationality Content === 

**Scott**

http://slatestarcodex.com/2017/05/11/silicon-valley-a-reality-check/ - What a person finds in Silicon Valley mirrors the seeker.

http://slatestarcodex.com/2017/05/09/links-517-rip-van-linkle/ - Links.

http://slatestarcodex.com/2017/04/11/sacred-principles-as-exhaustible-resources/ - Don't deplete the free speech commons.

http://slatestarcodex.com/2017/04/12/clarification-to-sacred-principles-as-exhaustible-resources/  - Clarifications and caveats on Scott's last article on free speech and sacred values.

http://slatestarcodex.com/2017/04/13/chametz/ - A Jewish Vampire Story

http://slatestarcodex.com/2017/04/17/learning-to-love-scientific-consensus/ - Scott Critiques a list of 10 maverick inventors. He then reconsiders his previous science skepticism.

http://slatestarcodex.com/2017/04/21/ssc-journal-club-childhood-trauma-and-cognition/ - A new study challenges the idea that child abuse reduces brain function.

http://slatestarcodex.com/2017/04/25/book-review-the-hungry-brain/ - Scott gives a favorable view of the "establishment" view of nutrition.

http://slatestarcodex.com/2017/04/26/anorexia-and-metabolic-set-point/ - Short Post (for Scott)

https://slatestarscratchpad.tumblr.com/post/160028275801/slatestarscratchpad-wayward-sidekick-you - Scott discusses engaging with ideas you find harmful. He also discusses his attitude toward making his blog as friendly as possible. [culture_war]

http://slatestarcodex.com/2017/05/01/neutral-vs-conservative-the-eternal-struggle/ - Formally neutral institutions have a liberal bias. Conservatives react by seceding and forming their own institutions. The end result is bad for society. [Culture War]

http://slatestarcodex.com/2017/05/04/getting-high-on-your-own-supply/ - "If you optimize for the epistemic culture that’s best for getting elected, but that culture isn’t also the best for running a party or governing a nation, then the fact that your culture affects your elites as well becomes a really big problem." Short for Scott.

http://slatestarcodex.com/2017/05/07/ot75-the-comment-king/ - bi-weekly visible open thread.

http://unsongbook.com/postscript-1-wrap-parties-fan-music/ - Final chapter of Unsong goes up approximately 8pm on Sunday. Unsong will have an epilogue which will go up on Wednesday. Wrap party details. (I will be at the wrap party on Sunday.)

http://unsongbook.com/book-iv-kings/ - "Somebody had to, no one would / I tried to do the best I could / And now it’s done, and now they can’t ignore us / And even though it all went wrong / I’ll stand against the whole unsong / With nothing on my tongue but HaMephorash"

http://unsongbook.com/chapter-71-but-for-another-gives-its-ease/ - Penultimate chapter of Unsong.

http://unsongbook.com/chapter-70-nor-for-itself-hath-any-care/ - Newest Chapter.

http://unsongbook.com/authors-note-10-hamephorash-hamephorash-party/ - Final Chapter goes up May 14. Bay Area reading party announced.

http://unsongbook.com/chapter-69-love-seeketh-not-itself-to-please/ - Newest Chapter.

http://unsongbook.com/chapter-68-puts-all-heaven-in-a-rage/ - Newest Chapter.

**Rationalism**

http://lesswrong.com/r/discussion/lw/ozz/gearsness_of_understanding/ - "I want to point at a property of models that isn't about what they're modeling. It interacts with the clarity of what they're modeling, but only in the same way that smudged lines in a roadmap interact with the clarity of the roadmap. This property is how deterministically interconnected the variables of the model are.". The theory is applied to multiple explicit examples.

https://thepdv.wordpress.com/2017/05/11/how-i-use-beeminder/ - Short but gives details. Beeminder is the only productivity system that worked for the author.

https://putanumonit.com/2017/05/09/time-well-spent/ - Akrasia and procrastination. A review of some of the rationalist thinking on the topic. Jacob's personal take and his system for tracking his productivity.

http://kajsotala.fi/2017/05/cognitive-core-systems-explaining-intuitions-behind-belief-in-souls-free-will-and-creation-myths/ - Description of four core systems that humans, and other animals, are born with. An explanation of why these systems lead to belief in souls. Short.

https://mindlevelup.wordpress.com/2017/05/06/taking-criticism/ - Reframing criticism so that it makes sense to the author (who is bad at taking criticism). A Q&A between the author and himself.

http://lesswrong.com/r/discussion/lw/oz1/soft_skills_for_running_meetups_for_beginners/ - Concrete advice for running meetups. Not especially focused on beginning organizers. Written by the person who organized Solstice.

http://effective-altruism.com/ea/19t/mental_health_resource_for_ea_community/ - A breakdown of the most useful information about Mania and Psychosis. Extremely practical advice. Julia Wise.

http://bearlamp.com.au/working-with-multiple-problems-at-once - Problems add up and you run out of time. How do you get out? Very practical.

http://agentyduck.blogspot.com/2017/05/creativity-taps.html - Practical ideas for exercising creativity.

http://lesswrong.com/r/discussion/lw/oyk/acting_on_your_intended_preferences_what_does/ - What does it look like in practice to pursue your goals? A series of practical questions to ask yourself. Links to a previous series of blog posts are included.

https://thingofthings.wordpress.com/2017/05/03/why-do-all-the-rationalists-live-in-the-bay-area/ - Benefits of living in the Bay. The Bay is a top place for software engineers even accounting for cost of living, Rationalist institutions are in the Bay, there are social and economic benefits to being around other community members.

https://qualiacomputing.com/2017/05/04/the-most-important-philosophical-question/ - “Is happiness a spiritual trick, or is spirituality a happiness trick?”

http://particularvirtue.blogspot.com/2017/05/how-to-build-community-full-of-lonely.html - Why so many rationalists feel lonely and concrete suggestions for improving social groups. Advice is given to people who are popular, lonely or organizers. Very practical.

https://hivewired.wordpress.com/2017/05/07/announcing-entropycon-12017/ - We beat smallpox, we will beat death, we can try to beat entropy. A humorous mantra against nihilism.

https://mindlevelup.wordpress.com/2017/04/30/there-is-no-akrasia/ - The author argues that akrasia isn't a "thing", it's a "sorta-coherent concept". He also argues that "akrasia" is not a useful concept and can be harmful.

http://bearlamp.com.au/experiments-iterations-and-the-scientific-method/ - A graph of the scientific method in practice. The author works through his own quantified-self experiments and discusses his experiences.

https://everythingstudies.wordpress.com/2017/04/29/all-the-worlds-a-trading-zone/ - Cultures with different norms and languages can interact successfully.

http://kajsotala.fi/2017/04/relationship-compatibility-as-patterns-of-emotional-association/ - What is relationship "chemistry"?

http://lesswrong.com/lw/oyc/nate_soares_replacing_guilt_series_compiled_in/ - Ebook. 45 blog posts on replacing guilt and shame with a stronger motivation.

http://mindingourway.com/assuming-positive-intent/ - "If you're actively working hard to make the world a better place, then we're on the same team. If you're committed to letting evidence and reason guide your actions, then I consider you friends, comrades in arms, and kin."

http://bearlamp.com.au/quantified-self-tracking-with-a-form/ - Practical advice based on Elo's personal experience.

http://lesswrong.com/r/discussion/lw/ovc/background_reading_the_real_hufflepuff_sequence/ - Links and Descriptions of rationalist articles about group norms and dynamics.

https://everythingstudies.wordpress.com/2017/04/24/people-are-different/ - "We need to understand, accept and respect differences, that one size does not fit all, but to (and from) each their own."

http://bearlamp.com.au/yak-shaving-2/ - "A question worth asking is whether you are in your life at present causing a build up of problems, a decrease of problems, or roughly keeping them about the same level."

http://lesswrong.com/r/discussion/lw/oxk/i_updated_the_list_of_rationalist_blogs_on_the/ - Up to date list of rationalist blogs.

https://aellagirl.com/2017/05/02/internet-communities-otters-vs-possums/ - Possums are people who like a specific culture; otters are people who like most cultures. What happens when the percentage of otters in a community increases?

https://aellagirl.com/2017/04/24/how-i-lost-my-faith/ - "People sometimes ask the question of why it took so long. Really I’m amazed that it happened at all. Before we even approach the aspect of “good arguments against religion”, you have to understand exactly how much is sacrificed by the loss of religion."

http://particularvirtue.blogspot.com/2017/04/on-social-spaces.html - Twitter, Tumblr, Facebook etc. PV responds to Zvi's articles about Facebook. PV defends Tumblr and Facebook and has some criticisms of Twitter. Several examples are given where rationalist groups tried to change platforms.

http://www.overcomingbias.com/2017/04/superhumans-live-among-us.html - Some human polymaths really are superhuman. But they don't have the track record to prove it.

https://thezvi.wordpress.com/2017/04/22/against-facebook/ - Sections: 1. A model breaking down how Facebook actually works. 2. An experiment with my News Feed. 3. Living with the Algorithm. 4. See First, Facebook’s most friendly feature. 5. Facebook is an evil monopolistic pariah Moloch. 6. Facebook is bad for you and Facebook is ruining your life. 7. Facebook is destroying discourse and the public record. 8. Facebook is out to get you.

https://thezvi.wordpress.com/2017/04/22/against-facebook-comparison-to-alternatives-and-call-to-action/ - Zvi's advice for managing your information streams and discussion platforms. Facebook can mostly be replaced.

https://rationalconspiracy.com/2017/04/22/moving-to-the-bay-area/ - Downsides of the Bay. Extensively sourced. Cost of living, traffic, public transit, crime, cleanliness.

https://nintil.com/2017/04/18/still-not-a-zombie-replies-to-commenters/ - Thoughts on consciousness and identity.

http://bearlamp.com.au/an-inquiry-into-memory-of-humans/ - The reader is asked to try various interesting memory exercises.

https://www.jefftk.com/p/how-to-make-housing-cheaper - 9 ways to make housing cheaper.

http://lesswrong.com/r/discussion/lw/owb/straw_hufflepuffs_and_lone_heroes/ - Should Harry have joined Hufflepuff in HPMOR? Harry had reasons to be a lone hero, do you?

http://lesswrong.com/lw/owa/lesswrong_analytics_february_2009_to_january_2017/ - Activity graphs of lesswrong over time, which posts had the most views, links to source code and further reading.

https://thezvi.wordpress.com/2017/04/23/help-us-find-your-blog-and-others/ - Zvi will read a post from your blog and consider adding you to his RSS feed.

https://thingofthings.wordpress.com/2017/04/11/book-post-for-march/ - Books on parenting.

https://boardgamesandrationality.wordpress.com/2017/04/24/first-blog-post/ - Dealing With Secret Information in boardgames and real life.

http://www.overcomingbias.com/2017/04/mormon-transhumanists.html - The relationship between religious community and technological change. Long for Overcoming Bias.

https://putanumonit.com/2017/04/15/bad-religion/ - "Rationality is a really unsatisfactory religion. But it’s a great life hack."

https://thezvi.wordpress.com/2017/04/12/escalator-action/ - Should we walk on escalators?

https://putanumonit.com/2017/04/21/book-review-too-like-the-lightning/ - The world of Jacob's dreams, thoughts on AI, a book review.

**EA**

http://effective-altruism.com/ea/19y/understanding_charity_evaluation/ - A detailed breakdown of how charity evaluation works in practice. Openly somewhat speculative.

http://blog.givewell.org/2017/05/11/update-on-our-views-on-cataract-surgery/ - Previously GiveWell had unsuccessfully tried to find recommendable cataract surgery charities. The biggest issues were “room for funding” and “lack of high quality monitoring data”. However they believe that cataract surgery is a promising intervention and they are doing more analysis.

https://80000hours.org/2017/05/how-much-do-hedge-fund-traders-earn/ - Detailed report on career trajectories and earnings. "We found that junior traders typically earn $300k – $3m per year, and it’s possible to reach these roles in 4 – 8 years."

https://www.givedirectly.org/blog-post?id=7612753271623522521 - 8 News links about GiveDirectly, Basic Income and cash transfer.

https://80000hours.org/2017/05/most-people-report-believing-its-incredibly-cheap-to-save-lives-in-the-developing-world/ - Details of a study on how much Americans think it costs to save a life. Discussion of why people gave such optimistic answers. "It turns out that most Americans believe a child can be prevented from dying of preventable diseases for very little – less than $100."

https://www.thelifeyoucansave.org/Blog/ID/1355/Are-Giving-Games-a-Better-Way-to-Teach-Philanthropy - Literature review on "philanthropy games". Covers both traditional student philanthropy courses and the much shorter "giving game".

https://www.givedirectly.org/blog-post?id=8255610968755843534 - Links to news stories about Effective Altruism

http://benjaminrosshoffman.com/an-openai-board-seat-is-surprisingly-expensive/ - " In exchange for a board seat, the Open Philanthropy Project is aligning itself socially with OpenAI, by taking the position of a material supporter of the project."

https://www.givedirectly.org/blog-post?id=5010525406506746433 - Links to News Articles about Give Directly, Basic Income and Cash Transfer.

https://www.givedirectly.org/blog-post?id=121797500310578692 - Report on a program to give cash to coffee farmers in eastern Uganda.

http://effective-altruism.com/ea/19d/update_on_effective_altruism_funds/ - Details from the first round of funding, community feedback, Mistakes and Updates.

http://lesswrong.com/r/discussion/lw/ox4/effective_altruism_is_selfrecommending/ - Open Philanthropy project has a closed validation loop. A detailed timeline of GiveWell/Open-Philanthropy is given and many untested assumptions are pointed out. A conceptual connection is made to confidence games.

http://lesswrong.com/r/discussion/lw/oxd/the_2017_effective_altruism_survey_please_take/ - Take the survey :)

https://www.givingwhatwecan.org/post/2017/04/career-of-professor-alan-fenwick/ - Retrospective on the career of the director of the Schistosomiasis Control Initiative.

http://www.openphilanthropy.org/blog/new-report-early-field-growth - The history of attempts to grow new fields of research or advocacy.

https://www.givedirectly.org/blog-post?id=4406309858976986548 - news links about GiveDirectly, Basic Income and Cash Transfers

https://intelligence.org/2017/04/30/2017-updates-and-strategy/ - Outreach, expansion, detailed research plan, state of the AI-risk community.

http://blog.givewell.org/2017/05/04/why-givewell-is-partnering-with-idinsight/ - IDinsight is an international NGO that aims to help its clients develop and use rigorous evidence to improve social impact. Summary, Background, goals, initial plans.

https://www.thelifeyoucansave.org/Blog/ID/1354/A-Shift-in-Priorities-at-the-Giving-Game-Project - Finding sustainable funding, Providing measurable outcomes, improving follow ups with participants.

http://www.openphilanthropy.org/blog/why-are-us-corporate-cage-free-campaigns-succeeding - The article contains a timeline of cage-free reform. Some background reasons given are: undercover investigations, college engagement, corporate engagement, ballot measures, gestation crate pledges, and European precedent.

https://www.givingwhatwecan.org/post/2017/04/a-successor-to-the-giving-what-we-can-trust/ - The Giving What We Can Trust has joined with the "Effective Altruism Funds" (run by the Center for Effective Altruism).

http://lesswrong.com/r/discussion/lw/oyf/bad_intent_is_a_behavior_not_a_feeling/ - Response to Nate Soares, application to EA. "If you try to control others’ actions, and don’t limit yourself to doing that by honestly informing them, then you’ll end up with a strategy that distorts the truth, whether or not you meant to."

**Ai_risk**

http://effective-altruism.com/ea/19c/intro_to_caring_about_ai_alignment_as_an_ea_cause/ - By Nate Soares. A modified transcript of the talk he gave at Google on the problem of AI alignment.

http://lukemuehlhauser.com/monkey-classification-errors/ , http://lukemuehlhauser.com/adversarial-examples-for-pigeons/ - Adversarial examples for monkeys and pigeons respectively.

https://intelligence.org/2017/05/10/may-2017-newsletter/ - Research updates, MIRI hiring, General news links about AI

https://intelligence.org/2017/04/12/ensuring/ - Nate Soares gives a talk at Google about "Ensuring smarter-than-human intelligence has a positive outcome". An outline of the talk is included.

https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/ - An extended discussion of Soares's latest paper "Cheating Death in Damascus".

**Research**

https://everythingstudies.wordpress.com/2017/05/12/the-eurovision-song-contest-taste-landscape/ - Analysis of Voting patterns in the Eurovision Contest. Alliances and voting Blocs are analyzed in depth.

https://srconstantin.wordpress.com/2017/05/12/do-pineal-gland-extracts-promote-longevity-well-maybe/ - Analysis of hormonal systems and their effect on metabolism and longevity.

https://acesounderglass.com/2017/05/11/an-opportunity-to-throw-money-at-the-problem-of-medical-science/ - Help crowdfund a randomized controlled trial. A promising Sepsis treatment needs a RCT but the method is very cheap and unpatentable. So there is no financial incentive for a company to fund the study.

https://randomcriticalanalysis.wordpress.com/2017/05/09/towards-a-general-factor-of-consumption/ - Factor Analysis leads to a general factor of consumption. Discussion of the data and analysis of the model. Very thorough.

https://randomcriticalanalysis.wordpress.com/2017/04/13/disposable-income-also-explains-us-health-expenditures-quite-well/ - Long Article, lots of graphs. "I argued consumption, specifically Actual Individual Consumption, is an exceptionally strong predictor of national health expenditures (NHE) and largely explains high US health expenditures.  I found AIC to be a much more robust predictor of NHE than GDP... I think it useful to also demonstrate these patterns as it relates to household disposable income"

https://randomcriticalanalysis.wordpress.com/2017/04/15/some-useful-data-on-the-dispersion-characteristics-of-us-health-expenditures/ - US Health spending is highly concentrated in a small fraction of the population. Is this true for other countries?

https://randomcriticalanalysis.wordpress.com/2017/04/17/on-popular-health-utilization-metrics/ - An extremely graph dense article responding to a widely cited paper claiming that "high utilization cannot explain high US health expenditures."

https://randomcriticalanalysis.wordpress.com/2017/04/28/health-consumption-and-household-disposable-income-outside-of-the-oecd/ - Another part in the series on healthcare expenses. Extending the analysis to non-OECD countries. Lots of graphs.

https://srconstantin.wordpress.com/2017/04/12/parenting-and-heritability-overview/ - Detailed literature review on heritability and what parenting can affect. A significant number of references are included.

https://nintil.com/2017/04/23/links-7/ - Psychology, Economics, Philosophy, AI

http://lesswrong.com/r/discussion/lw/ox8/unstaging_developmental_psychology/ - A mathematical model of stages of psychological development. The linked technical paper is very impressive. Starting from an abstract theory the authors managed to create a psychological theory that was concrete enough to apply in practice.

**Math and CS**

http://andrewgelman.com/2017/05/10/everybody-lies-seth-stevens-davidowitz/ - A fairly positive review of Seth's book on learning from data.

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-4-in-python/ - Writing a JIT compiler in Python. Discusses both using native Python code and the PeachPy library. Performance considerations are explicitly not discussed.

http://eli.thegreenplace.net/2017/book-review-essentials-of-programming-languages-by-d-friedman-and-m-wand/ - Short review. "This book is a detailed overview of some fundamental ideas in the design of programming languages. It teaches by presenting toy languages that demonstrate these ideas, with a full interpreter for every language"

http://eli.thegreenplace.net/2017/adventures-in-jit-compilation-part-3-llvm/ - LLVM can dramatically speed up straightforward source code.

http://www.scottaaronson.com/blog/?p=3221 - Machine Learning, Quantum Mechanics, Google Calendar

**Politics and Economics**

http://noahpinionblog.blogspot.com/2017/04/ricardo-reis-defends-macro_13.html - Macro is defended from a number of common criticisms. A large number of modern papers are cited (including 8 job market papers). Some addressed criticisms include: Macro relies on representative agents, Macro ignores inequality, Macro ignores finance and Macro ignores data and focuses mainly on theory.

http://econlog.econlib.org/archives/2017/04/economic_system.html - What are the fundamental questions an economic system must answer?

http://andrewgelman.com/2017/04/18/reputational-incentives-post-publication-review-two-partial-solutions-misinformation-problem/ - Gelman gives a list of important erroneous analyses in the news and in scientific journals. He then considers whether negative reputational incentives or post-publication peer review will solve the problem.

https://srconstantin.wordpress.com/2017/05/09/how-much-work-is-real/ - What fraction of jobs are genuinely productive?

https://hivewired.wordpress.com/2017/05/06/yes-this-is-a-hill-worth-dying-on/ - The Nazis were human too. Even if a hill is worth dying on, it's probably not worth killing for. Discussion of good universal norms. [Culture War]

https://srconstantin.wordpress.com/2017/05/09/chronic-fatigue-syndrome/ - Literature Analysis on Chronic Fatigue Syndrome. Extremely thorough.

https://www.gwern.net/newsletter/2017/04 - A Month's worth of links. Ai, Recent evolution, heritability and other topics.

https://thingofthings.wordpress.com/2017/05/05/the-cluster-structure-of-genderspace/ - For many traits the bell curves for men and women are quite close. Visualizations of Cohen's D. Discussion of trans specific medical interventions.

https://www.jefftk.com/p/replace-infrastructure-wholesale - Can you just dig up a city and replace all the infrastructure in a week?

https://thingofthings.wordpress.com/2017/04/19/deradicalizing-the-romanceless/ - Ozy discusses the problem of (male) involuntarily celibacy.

http://noahpinionblog.blogspot.com/2017/04/the-siren-song-of-homogeneity.html - The alt-right is about racial homogeneity. Smith reviews the data on whether a homogeneous society increases trust and social capital. Smith discusses Japanese culture and his time in Japan. Smith considers the arbitrariness of racial categories despite admitting that race has a biological reality. Smith flips around some alt-right slogans. [Extremely high quality engagement with opposing ideas. Culture War]

https://thezvi.wordpress.com/2017/04/16/united-we-blame/ - A list of articles about United, Zvi's thoughts on United, general ideas about airlines.

http://noahpinionblog.blogspot.com/2017/04/why-101-model-doesnt-work-for-labor.html - Noah Smith gives many reasons why the simple supply/demand model can't work for labor economics.

https://thingofthings.wordpress.com/2017/04/14/concerning-archive-of-our-own/ - Ozy defends the moderation policy of the fanfiction archive AO3. [Culture War]

https://thingofthings.wordpress.com/2017/04/13/fantasies-are-okay/ - When are fantasies ok? What about sexual fantasies? [Culture War]

https://srconstantin.wordpress.com/2017/04/25/on-drama/ - Ritual, The Psychology of Adolf Hitler, the dangerous emotion of High Drama, The Rite of Spring.

https://qualiacomputing.com/2017/04/26/psychedelic-science-2017-take-aways-impressions-and-whats-next/ - Notes on the 2017 Psychedelic Science conference.

**Amusement**

http://kajsotala.fi/2017/04/fixing-the-4x-end-game-boringness-by-simulating-legibility/ - "4X games (e.g. Civilization, Master of Orion) have a well-known problem where, once you get sufficiently far ahead, you’ve basically already won and the game stops being very interesting."

https://putanumonit.com/2017/05/12/dark-fiction/ - Jacob does some Kabbalistic analysis of the Story of Jacob, Unsong-style.

https://protokol2020.wordpress.com/2017/04/30/several-big-numbers-to-sort/ - 12 Amusing definitions of big numbers.

http://existentialcomics.com/comic/183 - The Life of Francis

http://existentialcomics.com/comic/181 - A Presocratic Get Together.

https://protokol2020.wordpress.com/2017/05/07/problem-with-perspective/ - A 3D geometry problem.

http://existentialcomics.com/comic/184 - Wittgenstein in the Great War

http://existentialcomics.com/comic/182 - Captain Metaphysics and the Postmodern Peril

**Adjacent**

https://medium.com/@freddiedeboer/conservatives-are-wrong-about-everything-except-predicting-their-own-place-in-the-culture-e5c036fdcdc5 - Conservatives correctly predicted the effects of gay acceptance and no fault divorce. They have also been proven correct about liberal bias in academia and the media. [Culture War]

https://medium.com/@freddiedeboer/franchises-that-are-appropriate-for-children-are-inherently-limited-in-scope-8170e76a16e2 - Superhero movies have an intended audience that includes children. This drastically limits what techniques they can use and themes they can explore. Freddie goes into the details.

https://fredrikdeboer.com/2017/05/11/study-of-the-week-rebutting-academically-adrift-with-its-own-mechanism/ - Freddie wrote his dissertation on the College Learning Assessment, the primary source in "Academically Adrift".

https://medium.com/@freddiedeboer/politics-as-politics-12ab43429e64 - Politics as “group affiliation” vs politics as politics. Annoying atheists aren’t as bad as fundamentalist Christians even if more annoying atheists exist in educated leftwing spaces. Freddie’s clash with the identitarian left despite huge agreement on the object level. Freddie is a socialist not a liberal. [Culture War]

https://www.ribbonfarm.com/2017/05/09/priest-guru-nerd-king/ - Facebook, Governance, Doctrine, Strategy, Tactics and Operations. Fairly short post for Ribbonfarm.

https://fredrikdeboer.com/2017/05/09/lets-take-a-deep-dive-into-that-times-article-on-school-choice/ - A critique of the problems in the Time's well cited article on school choice. Points out issues with selection bias, lack of theory and the fact that "not everyone can be average".

http://marginalrevolution.com/marginalrevolution/2017/05/conversation-garry-kasparov.html - "We talked about AI, his new book Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, why he has become more optimistic, how education will have to adjust to smart software, Russian history and Putin, his favorites in Russian and American literature, Tarkovsky..."

http://econlog.econlib.org/archives/2017/04/iq_with_conscie.html - "My fellow IQ realists are, on average, a scary bunch.  People who vocally defend the power of IQ are vastly more likely than normal people to advocate extreme human rights violations." There are interesting comments here: https://redd.it/6697sh .

http://econlog.econlib.org/archives/2017/04/iq_with_conscie_1.html - Short follow up to the above article.

http://marginalrevolution.com/marginalrevolution/2017/04/what-would-people-do-if-they-had-superpowers.html - Link to a paper showing 94% of people said they would use superpowers selfishly.

http://waitbutwhy.com/2017/04/neuralink.html - Elon Musk Wants to Build a wizard hat for the brain. Lots of details on the science behind Neuralink.

http://marginalrevolution.com/marginalrevolution/2017/04/dont-people-care-economic-inequality.html - Most Americans don’t mind inequality nearly as much as pundits and academics suggest.

http://marginalrevolution.com/marginalrevolution/2017/04/two-rationality-tests.html - What would you ask to determine if someone is rational? What would Tyler ask?

http://tim.blog/2017/05/04/exploring-smart-drugs-fasting-and-fat-loss-dr-rhonda-patrick/ - “Avoiding all stress isn’t the answer to fighting aging; it’s about building resiliency to environmental stress.”

http://wakingup.libsyn.com/what-should-we-eat - "Sam Harris speaks with Gary Taubes about his career as a science journalist, the difficulty of studying nutrition and public health scientifically, the growing epidemics of obesity and diabetes, the role of hormones in weight gain, the controversies surrounding his work, and other topics."

http://www.econtalk.org/archives/2017/05/jennifer_pahlka.html - Code for America. Bringing technology into the government sector.

http://heterodoxacademy.org/resources/viewpoint-diversity-experience/ - A six step process to appreciating viewpoint diversity. I am not sure this site will be the most useful to rationalists on the object level, but it's interesting to see what Haidt came up with.

http://www.econtalk.org/archives/2017/04/elizabeth_pape.html - Elizabeth Pape on Manufacturing and Selling Women's Clothing and Elizabeth Suzann

http://www.mrmoneymustache.com/2017/04/25/there-are-no-guarantees/ - Avoid Contracts. Don't work another year "just in case".

http://marginalrevolution.com/marginalrevolution/2017/04/saturday-assorted-links-109.html - Assorted Links on politics, Derrida, Shaolin Monks.

http://econlog.econlib.org/archives/2017/04/earth_20.html - Bryan Caplan was a guest on Freakonomics Radio. The topic was "Earth 2.0: Is Income Inequality Inevitable?".

https://www.ribbonfarm.com/2017/04/18/entrepreneurship-is-metaphysical-labor/ - Metaphysics as Intellectual Ergonomics. Entrepreneurship is Applied Metaphysics.

https://www.ribbonfarm.com/2017/04/13/idiots-scaring-themselves-in-the-dark/ - Getting Lost. "The uncanny. This is the emotion of eeriness, spookiness, creepiness"

**Podcast**

http://rationallyspeakingpodcast.org/show/rs-182-spencer-greenberg-on-how-online-research-can-be-faste.html - Podcast. Spencer Greenberg on "How online research can be faster, better, and more useful".

https://medium.com/conversations-with-tyler/patrick-collison-stripe-podcast-tyler-cowen-books-3e43cfe42d10 - Patrick Collison, co-founder of Stripe, interviews Tyler.

http://tim.blog/2017/04/11/cory-booker/ - Podcast with US Senator Cory Booker. "Street Fights, 10-Day Hunger Strikes, and Creative Problem-Solving"

http://econlog.econlib.org/archives/2017/04/the_undermotiva_1.html - Two case studies on libertarians who changed their views for bad reasons.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/death-sex-and-moneys-anna-sale-on-bringing-empathy-to-politics-50101701 - Interview with the host of the WNYC podcast Death, Sex, and Money.

http://marginalrevolution.com/marginalrevolution/2017/05/econtalk-podcast-russ-roberts-complacent-class.html - "Cowen argues that the United States has become complacent and the result is a loss of dynamism in the economy and in American life, generally. Cowen provides a rich mix of data, speculation, and creativity in support of his claims."

http://tim.blog/2017/04/16/marie-kondo/ - Podcast. "Marie Kondo is a Japanese organizing consultant, author, and entrepreneur."

http://www.econtalk.org/archives/2017/04/rana_foroohar_o.html - Podcast. Rana Foroohar on the Financial Sector and Makers and Takers

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cal-newport-on-doing-deep-work-and-escaping-social-media-49878016 - Cal Newport on doing Deep Work and escaping social media.

https://www.samharris.org/podcast/item/forbidden-knowledge - Podcast with Charles Murray. Controversy over The Bell Curve, the validity and significance of IQ as a measure of intelligence, the problem of social stratification, the rise of Trump. [culture war]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/elizabeth-warren-on-what-barack-obama-got-wrong-49949167 - Ezra Klein Podcast with Elizabeth Warren.

http://marginalrevolution.com/marginalrevolution/2017/04/stubborn-attachments-podcast-ft-alphaville.html - Podcast with Tyler Cowen on Stubborn Attachments. "I outline a true and objectively valid case for a free and prosperous society, and consider the importance of economic growth for political philosophy, how and why the political spectrum should be reconfigured, how we should think about existential risk, what is right and wrong in Parfit and Nozick and Singer and effective altruism, how to get around the Arrow Impossibility Theorem, to what extent individual rights can be absolute, how much to discount the future, when redistribution is justified, whether we must be agnostic about the distant future, and most of all why we need to “think big.”"

http://www.themoneyillusion.com/?p=32435 - Notes on three podcasts. Faster RGDP growth, Monetary Policy, Tyler Cowen's philosophical views.

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/vc-bill-gurley-on-transforming-health-care-50030526 - A conversation about which healthcare systems are possible in the USA and the future of Obamacare.

https://www.currentaffairs.org/2017/05/campus-politics-and-the-administrative-mind - The nature of college bureaucracy. Focuses on protests and Title IX. [Culture war]

http://www.stitcher.com/podcast/vox/the-ezra-klein-show/e/cory-booker-returns-live-to-talk-trust-trump-and-basic-incomes-50054271 - "Booker and I dig into America’s crisis of trust. Faith in both political figures and political institutions has plummeted in recent decades, and the product is, among other things, Trump’s presidency. So what does Booker think can be done about it?"

http://tim.blog/2017/04/22/dorian-yates/ - Bodybuilding Champion. High Intensity Training, Injury Prevention, and Building Maximum Muscle.

Soft Skills for Running Meetups for Beginners

17 Raemon 06 May 2017 04:04PM

Having a vibrant, local Less Wrong or EA community is really valuable, but at least in my experience, it tends to end up falling to the same couple of people, and if one or all of those people get burned out or get interested in another project, the meetup can fall apart. Meanwhile, there are often people who are interested in contributing but feel intimidated by the idea.

In the words of Zvi: You're good enough, you're smart enough and people would like you. (Or more explicitly, "assume you are one level higher than you think you are.")

This is an email I sent to the local NYC mailing list trying to break down some of the soft-skills and rules-of-thumb that I'd acquired over the years, to make running a meetup less intimidating. I acquired these gradually over several years. You don't need all the skills / concepts at once to run a meetup but having at least some of them will help a lot.

These are arranged roughly in order of "How much public speaking or 'interact with strangers' skill they require."

Look for opportunities to help in any fashion 

First, if public speaking stuff is intimidating, there're many things you can do that don't require much at all. Some examples:

  • Sending a email reminder to the group for people to pick a meetup topic each week (otherwise people may forget until the last minute)
  • Bringing food, or interesting toys to add some fun things to do before or after the official meetup starts.
  • Helping out with tech (i.e. setting up projectors, printing out things that need printing out in advance)
  • Take notes during interesting discussions (i.e. start up a google doc, announce that you're taking notes so people can specify if particular things should be off the record, and then post that google doc to whatever mailing list or internet group your community uses to organize)

Running Game Nights

If you're not comfortable giving a presentation or facilitating a conversation (two of the most common types of meetups), a fairly simple meetup is simply to run a game night. Find an interesting board game or card game, pitch it to the group, see if people are interested. (I recommend asking for a firm RSVP for this one, to make sure you have enough people)

Having an explicit activity can take some of the edge off of "talking in public."


Giving a short (3.5 minute) lightning talk

Sometimes, a simple meet-and-greet meetup with freeform socializing is fine. These can get a bit boring if they're the only thing you do - hanging out and talking is often the primary goal, but it's useful to have a reason to come out this particular week. 

A short lightning talk at the beginning of a meetup can spark interesting conversation. A meetup description like "So and so will be giving a short talk on X, followed by freeform discussion" can go a long way. I've heard reports that even a thoroughly mediocre lightning talk can still add a lot of value over "generic meet and greet."

(note: experimentation has apparently revealed that 3.5 minute talks are far superior to 5 minute talks, which tend to end up floundering slightly)


Facilitate OTHER people giving lightning talks

Don't have things to say yourself at all? That's okay! One of the surprisingly simple and effective forms of meetups is a mini-unconference where people just share thoughts on a particular topic, followed by some questions and discussions.

In this case, the main skill you need to acquire is the skill of "being able to cut people off when they've talked too much." 


Build the skill of Presence/Charisma/Public Speaking

Plenty of people have written about this. I recommend The Charisma Myth. I also recommend, if you've never tried it, doing the sort of exercise where you go up to random people on the street and try to talk to them. (Don't try to talk to them too much if they're not interested, but don't stress out about accidentally weirding people out. As long as you don't follow people around or talk to people in a place where they're naturally trapped with you, like a subway car, you'll be fine)

The goal is just exposure therapy for "OMG I'm talking to a person and don't know what to say", until it no longer feels scary. If the very idea of thinking about that fills you with anxiety, you can start with extremely tame goals like "smile at one stranger on your way home".

I did this for several years, sometimes in a "learn to talk to girls" sense and sometimes in a general "talk to strangers" sense, and it was really valuable.

Once you're comfortable talking at all, start paying attention to higher level stuff like "don't talk too fast, make eye contact, etc."


Think about things until you have some interesting ideas worth chatting about 

Maybe a formal presentation is scary, but a low-pressure conversation feels doable. A perfectly good meetup is "let's talk about this interesting idea I've been thinking about." You can think through ideas on your commute to work, lunch break etc, so that when it comes time to talk about it, you can just treat it like a normal conversation. (This may be similar to giving a lightning talk followed by freeform discussion, but with a bit less of an explicit talk at the beginning and a bit more structure to the discussion afterwards)

The trick is to not just think about interesting concepts, but to:


Help other people talk 

Unless you are giving a formal presentation, your job as a meetup facilitator isn't actually to talk at people, it's to get them to talk to each other. It's useful for you to have ideas, but mostly insofar as those ideas prompt interesting questions that you can ask other people, giving them the opportunity to think or to share their experiences.

 - you will probably need to "prime the pump" of discussion, starting with an explanation for why the idea seems important to think about in the first place, building up some excitement for the idea and giving people a chance to mull it over.

 - if you see someone who looks like they've been thinking, and maybe want to talk but are shy, explicitly say "hey, looks like you maybe had an idea - did you have anything you wanted to share?" (don't put too much pressure on them if the answer is "no not really.")

 - if someone is going on too long and you notice your attention or anyone else's face start to wander...


Knowing when to interrupt 

In general, try *not* to interrupt other people, but it will sometimes be necessary if people are getting off track, or if one person's been going on too long. Doing this well is a skill I haven't fully figured out myself, but I think it's better to be able to do it at all than not. Some possibilities:

 - "Hey, sorry to interrupt but this sounds like a tangent, maybe we can come back to this later during the followup conversation?"

 - "Hey, just wanted to make sure some others got a chance to share their thoughts."


Have an Agenda

Sometimes you run out of things to say, and then aren't sure what to do next. No matter what form the meetup takes, have a series of items planned out so that if things start to flounder, you can say "alright, let's see what's next on the agenda", and then just abruptly move on to that.

If you're doing a presentation, this can be a series of things you want to remember to get to. If you're teaching a skill, it can be a few different exercises relating to the skill. If you're facilitating a discussion, a series of questions to ask.


Welcome Newcomers

They made nontrivial effort just to come out. They're hoping to find something interesting here. Talk to them during the casual conversation before the meetup starts and try to get a sense of why they came, and if possible tailor the meetup to make sure those desires get met. If they aren't getting a chance to talk, make sure to direct the conversation to them at least once.


Not Giving a Fuck 

The first year that I ran meetups, I found it very stressful, and worried a lot about whether there was a meetup each week and whether it was good. Taking primary responsibility for that caused it to take up a semi-permanent slot in my working memory (or at least my subconscious mind), constantly running and worrying.

Then I had a year where I was just like "meh, screw it, I don't care", didn't run meetups much at all. 

Then I came back and approached it from a "I just want to have meetups as often as I can, do as good a job as I can, and if it ends up just being a somewhat awkward hangout, whatever it'll be fine." This helped tremendously.

I don't know if it's possible to skip to that part (probably not). But it's the end-goal.


More Specifically: Be Okay if People Don't Show Up

Sometimes you'll have a cool idea and you'll post it and... 1-2 people actually come. This can feel really bad. It is a thing that happens though, and it's okay, and learning how to cope with this is a key part of growing as an organizer. You should take note of when this happens and probably not do the exact sort of thing again, but it doesn't mean people don't like you, it means they either weren't interested in that particular topic or just happened to be busy that day.

(Some people may not trust me if I don't acknowledge it's at least possible that people actually just don't like you. It is. But I think it is way more likely that you didn't pitch the idea well, or build enough excitement beforehand, or that this particular idea just didn't work)

If you have an idea that is only worth doing if some critical mass of people attend, I recommend putting an explicit request "I will only do this meetup if X people enthusiastically say 'I really want to come to this and will make sure to attend.'"

It may be helpful to visualize in advance how you'll respond if 20+ people come and how you'll respond if 1-2 people come. (With the latter, aiming to have more personalized conversations rather than an "event" in the same fashion)

Building Excitement

Sometimes, people naturally think an idea is cool. A lot of the time, though, especially for weird/novel ideas, you will have to make them excited. Almost all of my offbeat ideas have required me to rally people, email them individually to check if they were coming, and talk about it in person a few times to get my own excitement to spread infectiously.

(For frame of reference, when I first pitched Solstice to the group, they were like "...really? Okay, I guess." And then I kept talking about it excitedly each week, sharing pieces of songs after the end of the formal meetup, demonstrating that I cared enough to put in a lot of work. I did similar things with the Hufflepuff Unconference)

This is especially important if you'll be putting a lot of effort into an experiment and you want to make sure it succeeds.

Step 1 - Be excited yourself. Find the kernel of an idea that seems high potential, even if it's hard to explain.

Step 2 - Put in a lot of work making sure you understand your idea and have things to say/do with it.

Step 3 - Share pieces of it in the aftermath of a previous meetup to see if people respond to it. If they don't respond at all, you may need to drop it. If at least 1 or 2 people respond with interest you can probably make it work but:

Step 4 - Email people individually. If you're comfortable enough with some people at the meetup, send them a private message saying "hey, would you be interested in this thing?" (People respond way more reliably to private messages than to a generic "hey guys, what do you think?" sent to the group.)

Step 5 - If people are waffling on whether the idea is exciting enough to come, say straightforwardly: I will do this if and only if X people respond enthusiastically about it. (And then if they don't, alas, let the event go)

Further Reading

I wrote this out, and then remembered that Kaj Sotala has written a really comprehensive guide to running meetups (37 pages long). If you want a lot more ideas and advice, I recommend checking it out.


There is No Akrasia

17 lifelonglearner 30 April 2017 03:33PM

I don’t think akrasia exists.


This is a fairly strong claim. I’m also not going to try and argue it.

 

What I’m really here to argue are the two weaker claims that:


a) Akrasia is often treated as a “thing” by people in the rationality community, and this can lead to problems, even though akrasia is a sorta-coherent concept.


b) If we want to move forward and solve the problems that fall under the akrasia-umbrella, it’s better to taboo the term akrasia altogether and instead employ a more reductionist approach that favors specificity.


But that’s a lot less catchy, and I think we can 80/20 it with the statement that “akrasia doesn’t exist”, hence the title and the opening sentence.


First off, I do think that akrasia is a term that resonates with a lot of people. When I’ve described this concept to friends (n = 3), they’ve all had varying degrees of reactions along the lines of “Aha! This term perfectly encapsulates something I feel!” On LW, it seems to have garnered acceptance as a concept, evidenced by the posts / wiki on it.


It does seem, then, that this concept of “want-want vs want” or “being unable to do what you ‘want’ to do” seems to point at a phenomenologically real group of things in the world.


However, I think that this is actually bad.


Once people learn the term akrasia and what it represents, they can now pattern-match it to their own associated experiences. I think that, once you’ve reified akrasia, i.e. turned it into a “thing” inside your ontology, problems occur:


First off, treating akrasia as a real thing gives it additional weight and power over you:


Once you start to notice the patterns, it’s harder to see things again as mere apparent chaos. In the case of akrasia, I think this means that people may try less hard because they suddenly realize they’re in the grip of this terrible monster called akrasia.


I think this sort of worldview ends up reinforcing some unhelpful attitudes towards solving the problems akrasia represents. As an example, here are two paraphrased things I’ve overheard about akrasia which I think illustrate this. (Happy to remove these if you would prefer not to be mentioned.)


“Akrasia has mutant healing powers…Thus you can’t fight it, you can only keep switching tactics for a time until they stop working…”


“I have massive akrasia…so if you could just give me some more high-powered tools to defeat it, that’d be great…”  

 

Both of these quotes seem to have taken the akrasia hypothesis a little too far. As I’ll later argue, “akrasia” seems to be dealt with better when you see the problem as a collection of more isolated disparate failures of different parts of your ability to get things done, rather than as an umbrella term.


I think that the current akrasia framing actually makes the problem more intractable.


I see potential failure modes where people come into the community, hear about akrasia (and all the related scary stories of how hard it is to defeat), and end up using it as an excuse (perhaps not an explicit belief, but as an alief) that impacts their ability to do work.


This was certainly the case for me, where improved introspection and metacognition on certain patterns in my mental behaviors actually removed a lot of my willpower which had served me well in the past. I may be getting slightly tangential here, but my point is that giving people models, useful as they might be for things like classification, may not always be net-positive.


Having new things in your ontology can harm you.


So just giving people some of these patterns and saying, “Hey, all these pieces represent a Thing called akrasia that’s hard to defeat,” doesn’t seem like the best idea.


How can we make the akrasia problem more tractable, then?


I claimed earlier that akrasia does seem to be a real thing, as it seems to be relatable to many people. I think this may actually be because akrasia maps onto too many things. It’s an umbrella term for lots of different problems in motivation and efficacy that could be quite disparate problems. The typical akrasia framing lumps together problems like temporal discounting with motivation problems like internal disagreements or ugh fields, and more.

 

Those are all very different problems with very different-looking solutions!


In the above quotes about akrasia, I think the speakers have mixed up the class with its members. Instead of treating akrasia as an abstraction that unifies a class of self-imposed problems that share the property of acting as obstacles towards our goals, we treat it as a problem unto itself.


Saying you want to “solve akrasia” makes about as much sense as directly asking for ways to “solve cognitive bias”. Clearly, cognitive biases are merely a class for a wide range of errors our brains make in our thinking. The exercises you’d go through to solve overconfidence look very different than the ones you might use to solve scope neglect, for example.


Under this framing, I think we can be less surprised when there is no direct solution to fighting akrasia—because there isn’t one.


I think the solution here is to be specific about the problem you are currently facing. It’s easy to just say you “have akrasia” and feel the smooth comfort of a catch-all term that doesn’t provide much in the way of insight. It’s another thing to go deep into your ugly problem and actually, honestly say what the problem is.


The important thing here is to identify which subset of the huge akrasia-umbrella your individual problem falls under and try to solve that specific thing instead of throwing generalized “anti-akrasia” weapons at it.


Is your problem one of remembering to do tasks? Then set up a Getting Things Done system.


Is your problem one of hyperbolic discounting, of favoring short-term gains? Then figure out a way to recalibrate the way you weigh outcomes. Maybe look into precommitting to certain courses of action.


Is your problem one of insufficient motivation to pursue things in the first place? Then look into why you care in the first place. If it turns out you really don’t care, then don’t worry about it. Else, find ways to source more motivation.


The basic (and obvious) technique I propose, then, looks like:


  1. Identify the akratic thing.

  2. Figure out what’s happening when this thing happens. Break it down into moving parts and how you’re reacting to the situation.

  3. Think of ways to solve those individual parts.

  4. Try solving them. See what happens.

  5. Iterate


Potential questions to be asking yourself throughout this process:

  • What is causing your problem? (EX: Do you have the desire but just aren’t remembering? Are you lacking motivation?)

  • How does this akratic problem feel? (EX: What parts of yourself is your current approach doing a good job of satisfying? Which parts are not being satisfied?)

  • Is this really a problem? (EX: Do you actually want to do better? How realistic would it be to see the improvements you’re expecting? How much better do you think you could be doing?)


Here’s an example of a reductionist approach I did:


“I suffer from akrasia.


More specifically, though, I suffer from a problem where I end up not actually having planned things out in advance. This leads me to do things like browse the internet without having a concrete plan of what I’d like to do next. In some ways, this feels good because I actually like having the novelty of a little unpredictability in life.


However, at the end of the day when I’m looking back at what I’ve done, I have a lot of regret over having not taken key opportunities to actually act on my goals. So it looks like I do care (or meta-care) about the things I do everyday, but, in the moment, it can be hard to remember.”


Now that I’ve far more clearly laid out the problem above, it seems easier to see that the problem I need to deal with is a combination of:

  • Reminding myself of the stuff I would like to do (maybe via a schedule or to-do list).

  • Finding a way to shift my in-the-moment preferences a little more towards the things I’ve laid out (perhaps with a break that allows for some meditation).


I think that once you apply a reductionist viewpoint and specifically say exactly what it is that is causing your problems, the problem is already half-solved. (Having well-specified problems seems to be half the battle.)

 

Remember, there is no akrasia! There are only problems that have yet to be unpacked and solved!


[Link] S-risks: Why they are the worst existential risks, and how to prevent them

16 Kaj_Sotala 20 June 2017 12:34PM

Notes from the Hufflepuff Unconference (Part 1)

16 Raemon 23 May 2017 09:04PM

On April 28th, we ran the Hufflepuff Unconference in Berkeley, at the MIRI/CFAR office common space.

There's room for improvement in how the Unconference could have been run, but it succeeded at the core things I wanted to accomplish: 

 - We established common knowledge of what problems people were actually interested in working on
 - We had several extensive discussions of some of those problems, with an eye towards building solutions
 - Several people agreed to work together towards concrete plans and experiments to make the community more friendly, as well as build skills relevant to community growth. (With deadlines and one person acting as project manager to make sure real progress was made)
 - We agreed to have a followup unconference in roughly three months, to discuss how those plans and experiments were going

Rough notes are available here. (Thanks to Miranda, Maia and Holden for taking really thorough notes)

This post will summarize some of the key takeaways, some speeches that were given, and my retrospective thoughts on how to approach things going forward.

But first, I'd like to cover a question that a lot of people have been asking about:

What does this all mean for people outside of the Bay?

The answer depends.

I'd personally like it if the overall rationality community got better at social skills, empathy, and working together, and at sticking with things that need sticking with (and in general, better at recognizing skills other than metacognition). In practice, individual communities can only change in the ways the people involved actually want to change, and there are other skills worth gaining that may be more important depending on your circumstances.

Does Project Hufflepuff make sense for your community?

If you're worried that your community doesn't have an interest in any of these things, my actual honest answer is that doing something "Project Hufflepuff-esque" probably does not make sense. I did not choose to do this because I thought it was the single-most-important thing in the abstract. I did it because it seemed important and I knew of a critical mass of people who I expected to want to work on it. 

If you're living in a sparsely populated area or haven't put a community together, the first steps do not look like this, they look more like putting yourself out there, posting a meetup on Less Wrong and just *trying things*, any things, to get something moving.

If you have enough of a community to step back and take stock of what kind of community you want and how to strategically get there, I think this sort of project can be worth learning from. Maybe you'll decide to tackle something Project-Hufflepuff-like, maybe you'll find something else to focus on. I think the most important thing is to have some kind of vision for something your community can do that is worth working together, and leveling up, to accomplish.

Community Unconferences as One Possible Tool

Community unconferences are a useful tool to get everyone on the same page and spur them on to start working on projects, and you might consider doing something similar. 

They may not be the right tool for you and your group - I think they're most useful in places where there's enough people in your community that they don't all know each other, but do have enough existing trust to get together and brainstorm ideas. 

If you have a sense that Project Hufflepuff is worthwhile for your community but the above disclaimers point towards my current approach not making sense for you, I'm interested in talking about it with you, but the conversation will look less like "Ray has ideas for you to try" and more like "Ray is interested in helping you figure out what ideas to try, and the solution will probably look very different."

Online Spaces

Since I'm actually very uncertain about a lot of this and see it as an experiment, I don't think it makes sense to push for any of the ideas here to directly change Less Wrong itself (at least, yet). But I do think a lot of these concepts translate to online spaces in some fashion, and I think it'd make sense to try out some concepts inspired by this in various smaller online subcommunities.

Table of Contents:

I. Introduction Speech

 - Why are we here?
 - The Mission: Something To Protect
 - The Invisible Badger, or "What The Hell Is a Hufflepuff?"
 - Meta Meetups Usually Suck. Let's Try Not To.

II. Common Knowledge

 - What Do People Actually Want?
 - Lightning Talks

III. Discussing the Problem (Four breakout sessions)

 - Welcoming Newcomers
 - How to handle people who impose costs on others?
 - Styles of Leadership and Running Events
 - Making Helping Fun (or at least lower barrier-to-entry)

IV. Planning Solutions and Next Actions

V. Final Words

I. Introduction: It Takes A Village to Save a World

(A more polished version of my opening speech from the unconference)

[Epistemic Status: This is largely based on intuition, looking at what our community has done and what other communities seem to be able to do. I'm maybe 85% confident in it, but it is my best guess]

In 2012, I got super into the rationality community in New York. I was surrounded by people passionate about thinking better and using that thinking to tackle ambitious projects. And in 2012 we all decided to take on really hard projects that were pretty likely to fail, because the expected value seemed high, and it seemed like even if we failed we'd learn a lot in the process and grow stronger.

That happened - we learned and grew. We became adults together, founding companies and nonprofits and creating holidays from scratch.

But two years later, our projects were either actively failing, or burning us out. Many of us became depressed and demoralized.

There was nobody who was okay enough to actually provide anyone emotional support. Our core community withered.

I ended up making that the dominant theme of the 2014 NYC Solstice, with a call-to-action to get back to basics and take care of each other.

I also went to the Berkeley Solstice that year. And... I dunno. In the back of my mind I was assuming "Berkeley won't have that problem - the Bay area has so many people, I can't even imagine how awesome and thriving a community they must have." (Especially since the Bay kept stealing all the Movers and Shakers of NYC).

The theme of the Bay Solstice turned out to be "Hey guys, so people keep coming to the Bay, running on a dream and a promise of community, but that community is not actually there, there's a tiny number of well-connected people who everyone is trying to get time with, and everyone seems lonely and sad. And we don't even know what to do about this."

In 2015, that theme in the Berkeley Solstice was revisited.

So I think that was the initial seed of what would become Project Hufflepuff - noticing that it's not enough to take on cool projects, that it's not enough to just get a bunch of people together and call it a community. Community is something you actively tend to. Insofar as Maslow's hierarchy is real, it's a foundation you need before ambitious projects can be sustainable.

There are other pieces of the puzzle - different lenses that, I believe, point towards a Central Thing. Some examples:

Group houses, individualism and coordination.

I've seen several group houses where, when people decide it no longer makes sense to live in the house, they... just kinda leave. Even if they've literally signed a lease. And everyone involved (the person leaving and those who remain) instinctively acts as if it's the remaining people's job to fill the leaver's spot, to make rent.

And the first time, this is kind of okay. But then each subsequent person leaving adds to a stressful undertone of "OMG are we even going to be able to afford to live here?". It eventually becomes depressing, and snowballs into a pit that makes newcomers feel like they don't WANT to move into the house.

Nowadays I've seen some people explicitly building into the roommate agreement a clear expectation of how long you stay and whose responsibility it is to find new roommates and pay rent in the meantime. But it's disappointing to me that this is something we needed, that we weren't instinctively paying attention to how we were imposing costs on each other in the first place. That when we *violated a written contract*, let alone a handshake agreement, we did not take it upon ourselves (or hold each other accountable) to ensure we could fulfill our end of the bargain.

Friends, and Networking your way to the center

This community puts pressure on people to improve. It's easier to improve when you're surrounded by ambitious people who help or inspire each other level up. There's a sense that there's some cluster of cool-people-who-are-ambitious-and-smart who've been here for a while, and... it seems like everyone is trying to be friends with those people. 

It also seems like people just don't quite get that friendship is a skill, that adult friendships in City Culture can be hard, and it can require special effort to make them happen.

I'm not entirely sure what's going on here - it doesn't make sense to say anyone's obligated to hang out with any particular person (or obligated NOT to), but if 300 people aren't getting the connection they want it seems like *somewhere people are making a systematic mistake.* 

(Since the Unconference, Maia has tackled this particular issue in more detail)

 

The Mission - Something To Protect

 

As I see it, the Rationality Community has three things going on: Truth. Impact. And "Being People".

In some sense, our core focus is the practice of truthseeking. The thing that makes that truthseeking feel *important* is that it's connected to broader goals of impacting the world. And the thing that makes this actually fun and rewarding enough to stick with is a community that meets our needs, where we can both flourish as individuals and find the relationships we want.

I think we have made major strides in each of those areas over the past seven years. But we are nowhere near done.

Different people have different intuitions of which of the three are most important. Some see some of them as instrumental, or terminal. There are people for whom Truthseeking is *the point*, and they'd have been doing that even if there wasn't a community to help them with it, and there are people for whom it's just one tool of many that helps them live their life better or plan important projects.

I've observed a tendency to argue about which of these things is most important, or what tradeoffs are worth making. Inclusiveness versus high standards. Truth vs action. Personal happiness vs high achievement.

I think that kind of argument is a mistake.

We are falling woefully short on all of these things. 

We need something like 10x our current capacity for seeing, and thinking. 10x our capacity for doing. 10x our capacity for *being healthy people together.*

I say "10x" not because all these things are intrinsically equal. The point is not to make a politically neutral push to make all the things sound nice. I have no idea exactly how far short we're falling on each of these because the targets are so far away I can't even see the end, and we are doing a complicated thing that doesn't have clear instructions and might not even be possible.

The point is that all of these are incredibly important, and if we cannot find a way to improve *all* of these, in a way that is *synergistic* with each other, then we will fail.

There is a thing at the center of our community. Not all of us share the exact same perspective on it. For some of us it's not the most important thing. But it's been at the heart of the community since the beginning and I feel comfortable asserting that it is the thing that shapes our culture the most:

The purpose of our community is to make sure this place is okay:

The world isn't okay right now, on a number of levels. And a lot of us believe there is a strong chance it could become dramatically less okay. I've seen people make credible progress on taking responsibility for pieces of our home. But when all is said and done, none of our current projects really give me the confidence that things are going to turn out all right. 

Our community was brought together on a promise, a dream, and we have not yet actually proven ourselves worthy of that dream. And to make that dream a reality we need a lot of things.

We need to be able to criticize, because without criticism, we cannot improve.

If we cannot, I believe we will fail.

We need to be able to talk about ideas that are controversial, or uncomfortable - otherwise our creativity and insight will be crippled.

If we cannot, I believe we will fail.

We need to be able to do those things without alienating people. We need to be able to criticize without making people feel untrusted and discouraged from even taking action. We need to be able to discuss challenging things while earnestly respecting the notion that *talking about ideas gives those ideas power and has concrete effects on social reality*, and sometimes that can hurt people.

If we cannot figure out how to do that, I believe we will fail.

We need more people who are able and willing to try things that have never been done before. To stick with those things long enough to *get good at them*, to see if they can actually work. We need to help each other do impossible things. And we need to remember to check for and do the *possible*, boring, everyday things that are in fact straightforward and simple and not very inspiring. 

If we cannot manage to do that, I believe we will fail.

We need to be able to talk concretely about what the *highest leverage actions in the world are*. We need to prioritize those things, because the world is huge and broken and we are small. I believe we need to help each other through a long journey, building bigger and bigger levers, building connections with people outside our community who are undertaking the same journey through different perspectives.

And in the process, we need to not make it feel like if *you cannot personally work on those highest leverage things, that you are not important.* 

There's the kind of importance where we recognize that some people have scarce skills and drive, and the kind of importance where we remember that *every* person has intrinsic worth, and you owe *nobody* any special skills or prestigious sounding projects for your life to be worthwhile.

This isn't just a philosophical matter - I think it's damaging to our mental health and our collective capacity. 

We need to recognize that the distribution of skills we tend to reward or punish is NOT just about which ones are actually most valuable - sometimes it is simply founder effects and blind spots.

We cannot be a community for everyone - I believe trying to include anyone with a passing interest in us is a fool's errand. But there are many people with valuable skills to contribute who have turned away, feeling frustrated and un-valued.

If we cannot find a way to accomplish all of these things at once, I believe we will fail.

The thesis of Project Hufflepuff is that it takes (at least) a village to save a world. 

It takes people doing experimental impossible things. It takes caretakers. It takes people helping out with unglorious tasks. It takes technical and emotional and physical skills. And while it does take some people who specialize in each of those things, I think it also needs many people who are at least a little bit good at each of them, to pitch in when needed.

Project Hufflepuff is not the only thing our community needs, or the most important. But I believe it is one of the necessary things that our community needs, if we're to get to 10x our current Truthseeking, Impact and Human-ing.

If we're to make sure that our home is okay.

The Invisible Badger

"A lone hufflepuff surrounded by slytherins will surely wither as if being leeched dry by vampires."

- Duncan

[Epistemic Status: My evidence for this is largely based on discussions with a few people for whom the badger seems real and valuable, and who report things being different in other communities, as well as some of my general intuitions about society. I'm 75% sure the badger exists, 90% sure that it's worth leaning into the idea of the badger to see if it works for you, and maybe 55% sure that it's worth trying to see the badger if you can't already make out its edges.]


 

If I *had* to pick a clear thing that this conference is about without using Harry Potter jargon, I'd say "Interpersonal dynamics surrounding trust, and how those dynamics apply to each of the Impact/Truth/Human focuses of the rationality community."

I'm not super thrilled with that term because I think I'm grasping more for some kind of gestalt. An overall way of seeing and being that's hard to describe and that doesn't come naturally to the sort of person attracted to this community.

Much like the blind folk and the elephant, who each touched a different part of the animal and came away with a different impression (the trunk seems like a snake, the legs seem like a tree), I've been watching several people in the community try to describe things over the past few years. And maybe those things are separate but I feel like they're secretly a part of the same invisible badger.

Hufflepuff is about hard work, and loyalty, and camaraderie. It's about emotional intelligence. It's about seeing value in day to day things that don't directly tie into epic narratives. 

There's a bunch of skills that go into Hufflepuff. And part of what I want is for people to get better at those skills. But I think there's a mindset, an approach, fairly different from the typical rationalist mindset, that makes those skills easier. It's something that's harder when you're being rigorously utilitarian and building models of the world out of game theory and incentives.

Mindspace is deep and wide, and I don't expect that mindset to work for everyone. I don't think everyone should be a Hufflepuff. But I do think it'd be valuable to the community if more people at least had access to this mindset and more of these skills.

So what I'd like, for tonight, is for people to lean into this idea. Maybe in the end you'll find that this doesn't work for you. But I think many people's first instinct is going to be that this is alien and uncomfortable and I think it's worth trying to push past that.

The reason we're doing this conference together is because the Hufflepuff way doesn't really work if people are trying to do it alone - I think it requires trust and camaraderie and persistence to really work. I don't think we can have that required trust all at once, but if there are multiple people trying to make it work, who can incrementally trust each other more, I think we can reach a place where things run more smoothly, where we have stronger emotional connections, and where we trust each other enough to take on more ambitious projects than we could if we were all optimizing as individuals.

Meta-Meetups Suck. Let's Not.

This unconference is pretty meta - we're talking about norms and vague community stuff we want to change.

Let me tell you, meta meetups are the worst. Typically you end up going around in circles complaining and wishing there were more things happening and that people were stepping up and maybe if you're lucky you get a wave of enthusiasm that lasts a month or so and a couple things happen but nothing really *changes*.

So. Let's not do that. Here's what I want to accomplish and which seems achievable:

1) Establish common knowledge of important ideas and behavior patterns. 

Sometimes you DON'T need to develop a whole new skill, you just need to notice that your actions are impacting people in a different way, and maybe that's enough for you to decide to change some things. Or maybe someone has a concept that makes it a lot easier for you to start gaining a new skill on your own.

2) Establish common knowledge of who's interested in trying which new norms, or which new skills. 

We don't actually *know* what the majority of people want here. I can sit here and tell you what *I* think you should want, but ultimately what matters is what things a critical mass of people want to talk about tonight.

Not everyone has to agree that an idea is good to try it out. But there's a lot of skills or norms that only really make sense when a critical mass of other people are trying them. So, maybe of the 40 people here, 25 people are interested in improving their empathy, and maybe another 20 are interested in actively working on friendship skills, or sticking to commitments. Maybe those people can help reinforce each other.

3) Explore ideas for social and skillbuilding experiments we can try, that might help. 

The failure mode of Ravenclaws is to think about things a lot and then not actually get around to doing them. A failure mode of ambitious Ravenclaws is to think about things a lot, do them, and then assume that because they're smart they've thought of everything, and then not listen to feedback when they get things subtly or majorly wrong.

I'd like us to end by thinking of experiments with new norms, or habits we'd like to cultivate. I want us to frame these as experiments, that we try on a smaller scale and maybe promote more if they seem to be working, while keeping in mind that they may not work for everyone.

4) Commit to actions to take.

Since the default outcome is for these commitments to peter out and fail, I'd like us to spend time bulletproofing them, brainstorming and coming up with trigger-action plans so that they actually have a chance to succeed.

Tabooing "Hufflepuff"

Having said all that talk about The Hufflepuff Way...

...the fact is, much of the reason I've used those words is to paint a rough picture to attract the sort of person I wanted to attract to this unconference.

It's important that there's a fuzzy, hard-to-define-but-probably-real concept that we're grasping towards, but it's also important not to be talking past each other. Early on in this project I realized that a few people who I thought were on the same page actually meant fairly different things. Some cared more about empathy and friendship. Some cared more about doing things together, and expected deep friendships to arise naturally from that.

So I'd like us to establish a trigger-action-plan right now - for the rest of this unconference, if someone says "Hufflepuff", y'all should say "What do you mean by that?" and then figure out whatever concrete thing you're actually trying to talk about.

II. Common Knowledge

The first part of the unconference was about sharing our current goals, concerns and background knowledge that seemed useful. Most of the specifics are covered in the notes. But I'll talk here about why I included the things I did and what my takeaways were afterwards on how it worked.

Time to Think

The first thing I did was have people sit and think about what they actually wanted to get out of the conference, and what obstacles they could imagine getting in the way of that. I did this because often, I think our culture (ostensibly about helping us think better) doesn't give us time to think, and instead the people who are quick-witted and conversationally dominant end up doing most of the talking. (I wrote a post a year ago about this, the 12 Second Rule). In this case I gave everyone 5 minutes, which is something I've found helpful at small meetups in NYC.

This had mixed results - some people reported that while they can think well by themselves, in a group setting they find it intimidating and their mind starts wandering instead of getting anything done. They found it much more helpful when I eventually let people-who-preferred-to-talk-to-each-other go into another room to talk through their ideas out loud.

I think there's some benefit to both halves of this and I'm not sure how common each set of preferences is. It's certainly true that it's not common for conferences to give people a full 5 minutes to think, so I'd expect it to be somewhat uncomfortable-feeling regardless of whether it was useful.

But an overall outcome of the unconference was that it was somewhat lower energy than I'd wanted, and opening with 5 minutes of silent thinking seemed to contribute to that, so for the next unconference I run, I'm leaning towards a shorter period of time for private thinking (Somewhere between 12 and 60 seconds), followed by "turn to your neighbors and talk through the ideas you have", followed by "each group shares their concepts with the room."

"What is do you want to improve on? What is something you could use help with?"

I wanted people to feel like active participants rather than passive observers, and I didn't want people to just think "it'd be great if other people did X", but to keep an internal locus of control - what can *I* do to steer this community better? I also didn't want people to be thinking entirely individualistically.

I didn't collect feedback on this specific part and am not sure how valuable others found it (if you were at the conference, I'd be interested if you left any thoughts in the comments). Some anonymized things people described:

  • When I make social mistakes, consider it failure; this is unhelpful

  • Help point out what they need help with

  • Have severe akrasia, would like more “get things done” magic tools

  • Getting to know the bay area rationalist community

  • General bitterness/burned out

  • Reduce insecurity/fear around sharing

  • Avoiding spending most words signaling to have read a particular thing; want to communicate more clearly

  • Creating systems that reinforce unnoticed good behaviour

  • Would like to learn how to try at things

  • Find place in rationalist community

  • Staying connected with the group

  • Paying attention to what they want in the moment, in particular when it’s right to not be persistent

  • Would like to know the “landing points” to the community to meet & greet new people

  • Become more approachable, & be more willing to approach others for help; community cohesiveness

  • Have been lonely most of life; want to find a place in a really good healthy community

  • Re: prosocialness, being too low on Maslow’s hierarchy to help others

  • Abundance mindset & not stressing about how to pay rent

  • Cultivate stance of being able to do helpful things (action stance) but also be able to notice difference between laziness and mental health

  • Don’t know how to respect legit safety needs w/o getting overwhelmed by arbitrary preferences; would like to model people better to give them basic respect w/o having to do arbitrary amount of work

  • Starting conversations with new people

  • More rationalist group homes / baugruppe

  • Being able to provide emotional support rather than just logistics help

  • Reaching out to people at all without putting too much pressure on them

  • Cultivate lifelong friendships that aren’t limited to particular time and place

  • Have a block around asking for help bc doesn’t expect to reciprocate; would like to actually just pay people for help w stuff

  • Want to become more involved in the community

  • Learn how to teach other people “ops skills”

  • Connections to people they can teach and who can teach them

Lightning Talks

Lightning talks are a great way to give people an opportunity to not just share ideas, but get some practice at public presentation (which I've found can be a great gateway tool for overall confidence and ability to get things done in the community). Traditionally they are 5 minutes long. CFAR has found that 3.5 minute lightning talks are better than 5 minute talks, because it cuts out some rambling and tangents.

It turned out we had more people than I'd originally planned time for, so we ended up switching to two minute talks. I actually think this was even better, and my plan for next time is to do 1-minute timeslots but allow people to sign up for multiple if they think their talk requires it, so people default to giving something short and sweet.

Rough summaries of the lightning talks can be found in the notes.

III. Discussing the Problem

The next section involved two "breakout sessions" - two 20 minute periods for people to split into smaller groups and talk through problems in detail. This was done in a somewhat impromptu fashion, with people writing down the talks they wanted to do on the whiteboard and then arranging them so most people could go to a discussion that interested them.

The talks were:

 -  Welcoming Newcomers
 -  How to handle people who impose costs on others?
 -  Styles of Leadership and Running Events
 -  Making Helping Fun (or at least lower barrier-to-entry)
 -  Circling session 

There was a suggested discussion about outreach, which I asked to table for a future unconference. My reason was that outreach discussions tend to get extremely meta and seem to be an attractor (people end up focusing on how to bring more people into the community without actually making sure the community is good, and I wanted the unconference to focus on the latter.)

I spent some time drifting between sessions, and was generally impressed both with the practical focus each discussion had, as well as the way they were organically moderated.

Again, more details in the notes.

IV. Planning Solutions and Next Actions

After about an hour of discussion and mingling, we came back to the central common space to describe key highlights from each session, and begin making concrete plans. (Names are crediting people who suggested an idea and who volunteered to make it happen)

Creating Norms for Your Space (Jane Joyce, Tilia Bell)

The "How to handle people who impose costs on other" conversation ended up focusing on minor but repeated costs. One of the hardest things to moderate as an event host is not people who are actively disruptive, but people who just a little bit awkward or annoying - they'd often be happy to change their behavior if they got feedback, but giving feedback feels uncomfortable and it's hard to do in a tactful way. This presents two problems at once: parties/events/social-spaces end up a more awkward/annoying than they need to be, and often what happens is that rather than giving feedback, the hosts stop inviting people doing those minor things, which means a lot of people still working on their social skills end up living in fear of being excluded.

Solving this fully requires a few different things at once, and I'm not sure I have a clear picture of what it looks like, but one stepping stone people came up with was creating explicit norms for a given space, and a practice of reminding people of those norms in a low-key, nonjudgmental way.

I think this will require a lot of deliberate effort and practice on the part of hosts to avoid alternate bad outcomes like "the norms get disproportionately enforced on people the hosts like and applied unfairly to people they aren't close with". But I do think it's a step in the right direction to showcase what kind of space you're creating and what the expectations are.

Different spaces can be tailored for different types of people with different needs or goals. (I'll have more to say about this in an upcoming post - doing this right is really hard, I don't actually know of any groups that have done an especially good job of it.)

I *was* impressed with the degree to which everyone in the conversation seemed to be taking into account a lot of different perspectives at once, and looking for solutions that benefited as many people as possible.

Welcoming Committee (Mandy Souza, Tessa Alexanian)

Oftentimes at events you'll see people who are new, or who don't seem comfortable getting involved with the conversation. Many successful communities do a good job of explicitly welcoming those people. Some people at the unconference decided to put together a formal group for making sure this happens more.

The exact details are still under development, but the basic idea is to have a network of people who are interested in going to different events and playing the role of the welcomer. I think of it as sort of an "Uber for welcomers" network (i.e. it both provides a place for people running events to ask for help with welcoming, and a way for people who are interested in welcoming to find events that need welcomers).

It also included some ideas for better infrastructure, such as reviving "bayrationality.org" to make it easier for newcomers to figure out what events are going on (possibly including links to the codes of conduct for different spaces as well). In the meanwhile, some simple changes were the introduction of a facebook group for Bay Area Rationalist Social Events.

Softskill-sharing Groups (Mike Plotz and Jonathan Wallis)

The leadership styles discussion led to the concept that in order to have a flourishing community, and to be a successful leader, it's valuable to make yourself legible to others, and others more legible to yourself. Even small improvements in an activity as frequent as communication can have huge effects over time, as we make it easier to see each other as we actually are and to clearly exchange our ideas. 

A number of people wanted to improve in this area together, and so we’re working towards establishing a series of workshops with a focus on practice and individual feedback. A longer post on why this is important is coming up, and there will be information on the structure of the event after our first teacher’s meeting. If you would like to help out or participate, please fill out this poll:

https://goo.gl/forms/MzkcsMvD2bKzXCQN2

Circling Explorations (Qiaochu and others)

Much of the discussion at the Unconference, while focused on community, ultimately was explored through an intellectual lens. By contrast, "Circling" is a practice developed by the Authentic Relating community which is focused explicitly on feelings. The basic premise is (sort of) simple: you sit in a circle in a secluded space, and you talk about how you're feeling in the moment. Exactly how this plays out is a bit hard to explain, but the intended result is to become better at noticing both your own feelings and those of the people around you.

Opinions were divided as to whether this was something that made sense for "rationalists to do on their own", or whether it made more sense to visit more explicitly Circling-focused communities, but several people expressed interest in trying it again.

Making Helping Fun and More Accessible (Suggested by Oliver Habryka)

Ultimately we want a lot of people who are able and excited to help out with challenging projects - to improve our collective group ambition. But to get there, it'd be really helpful to have "gateway helping" - things people can easily pitch in to do that are fun, rewarding, clearly useful but on the "warm fuzzies" side of helping. Oliver suggested this as a way to get people to start identifying as people-who-help.

There were two main sets of habits that seemed worth cultivating:

1) Making it clear to newcomers that they're encouraged to help out with events, and that this is actually a good way to make friends and get more involved. 

2) For hosts and event planners, look for opportunities to offer people things that they can help with, and make sure to publicly praise those who do help out.

Some of this might dovetail nicely with the Welcoming Committee, both as something people can easily get involved with, and (if there ends up being a public-facing website to introduce people to the community) as a way to connect people with events that could use help.

Volunteering-as-Learning, and Big Event Specific Workshops

Sometimes volunteering just requires showing up. But sometimes it requires special skills, and some events might need people who are willing to practice beforehand or learn-by-doing with a commitment to help at multiple events.

A vague cluster of skills that's in high demand is "predict logistical snafus in advance to head them off, and notice logistical snafus happening in realtime so you can do something about them." Earlier this year there was an Ops Workshop that aimed to teach this sort of skill, which went reasonably but didn't really lead into a concrete use for the skills to help them solidify.

One idea was to do Ops workshops (or other specialized training) in the month before a major event like Solstice or EA Global, giving them an opportunity to practice skills and making that particular event run smoother.

(This specific idea is not currently planned for implementation as it was among the more ambitious ones, although Brent Dill's series of "practice setting up a giant dome" beach parties in preparation for Burning Man are pointing in a similar direction)

Making Sure All This Actually Happens (Sarah Spikes, and hopefully everyone!)

To avoid the trap of dreaming big and not actually getting anything done, Sarah Spikes volunteered as project manager, creating an Asana page. People who were interested in committing to a deadline could opt into getting pestered by her to make sure things got done.

V. Parting Words

To wrap up the event, I focused on some final concepts that underlie this whole endeavor. 

The thing we're aiming for looks something like this:

In a couple months (hopefully in July), there'll be a followup unconference. The theme will be "Innovation and Excellence", addressing the twofold question "how do we encourage more people to start cool projects?" and "how do we get to a place where longterm projects ultimately reach a high quality state?"

Both elements feel important to me, and they require somewhat different mindsets (both on the part of the people running the projects, and the part of the community members who respond to them). Starting new things is scary and having too high standards can be really intimidating, yet for longterm projects we may want to hold ourselves to increasingly high standards over time.

My current plan (subject to lots of revision) is for this to become a series of community unconferences that happen roughly every 3 months. The Bay area is large enough with different overlapping social groups that it seems worthwhile to get together every few months and have an open-structured event to see people you don't normally see, share ideas, and get on the same page about important things.

Current thoughts for upcoming unconference topics are:

 - Innovation and Excellence
 - Personal Epistemic Hygiene
 - Group Epistemology

An important piece of each unconference will be revisiting things at the previous one, to see if projects, ideas or experiments we talked about were actually carried out and what we learned from them (most likely with anonymous feedback collected beforehand so people who are less comfortable speaking publicly have a chance to express any concerns). I'd also like to build on topics from previous unconferences so they have more chance to sink in and percolate (for example, have at least one talk or discussion about "empathy and trust as related to epistemic hygiene").

Starting and Finishing Unconferences Together

My hope is to get other people involved sooner rather than later so this becomes a "thing we are doing together" rather than a "thing I am doing." One of my goals with this is also to provide a platform where people who are interested in getting more involved with community leadership can take a step further towards that, no matter where they currently stand (ranging anywhere from "give a 30 second lightning talk" to "run a discussion, or give a keynote talk" to "be the primary organizer for the unconference.")

I also hope this is able to percolate into online culture, and to other in-person communities where a critical mass of people think this'd be useful. That said, I want to caution that I consider this all an experiment, motivated by an intuitive sense that we're missing certain things as a culture. That intuitive sense has yet to be validated in any concrete fashion. I think "willingness to try things" is more important than epistemic caution, but epistemic caution is still really important - I recommend collecting lots of feedback and being willing to shift direction if you're trying anything like the stuff suggested here.

(I'll have an upcoming post on "Ways Project Hufflepuff could go horribly wrong")

Most importantly, I hope this provides a mechanism for us to collectively take more seriously the ideas that we're ostensibly supposed to be taking seriously. I hope that this translates into the sort of culture that The Craft and The Community was trying to point us towards, and, ideally, eventually, a concrete sense that our community can play a more consistently useful role at making sure the world turns out okay. 

If you have concerns, criticism, or feedback, I encourage you to comment here if you feel comfortable, or on the Unconference Feedback Form. So far I've been erring on the side of move forward and set things in motion, but I'll be shifting for the time being towards "getting feedback and making sure this thing is steering in the right direction."

-

In addition to the people listed throughout the post, I'd like to give particular thanks to Duncan Sabien for general inspiration and a lot of concrete help, Lahwran for giving the most consistent and useful feedback, and Robert Lecnik for hosting the space. 

Thought experiment: coarse-grained VR utopia

15 cousin_it 14 June 2017 08:03AM

I think I've come up with a fun thought experiment about friendly AI. It's pretty obvious in retrospect, but I haven't seen it posted before. 

When thinking about what friendly AI should do, one big source of difficulty is that the inputs are supposed to be human intuitions, based on our coarse-grained and confused world models, while the AI's actions are supposed to be fine-grained actions based on the true nature of the universe, which can turn out very weird. That leads to a messy problem of translating preferences from one domain to another, which crops up everywhere in FAI thinking; Wei's comment and Eliezer's writeup are good places to start.

What I just realized is that you can handwave the problem away, by imagining a universe whose true nature agrees with human intuitions by fiat. Think of it as a coarse-grained virtual reality where everything is built from polygons and textures instead of atoms, and all interactions between objects are explicitly coded. It would contain player avatars, controlled by ordinary human brains sitting outside the simulation (so the simulation doesn't even need to support thought).

The FAI-relevant question is: How hard is it to describe a coarse-grained VR utopia that you would agree to live in?

If describing such a utopia is feasible at all, it involves thinking about only human-scale experiences, not physics or tech. So in theory we could hand it off to human philosophers or some other human-based procedure, thus dealing with "complexity of value" without much risk. Then we could launch a powerful AI aimed at rebuilding reality to match it (more concretely, making the world's conscious experiences match a specific coarse-grained VR utopia, without any extra hidden suffering). That's still a very hard task, because it requires solving decision theory and the problem of consciousness, but it seems more manageable than solving friendliness completely. The resulting world would be suboptimal in many ways, e.g. it wouldn't have much room for science or self-modification, but it might be enough to avert AI disaster (!)

I'm not proposing this as a plan for FAI, because we can probably come up with something better. But what do you think of it as a thought experiment? Is it a useful way to split up the problem, separating the complexity of human values from the complexity of non-human nature?

Bi-Weekly Rational Feed

15 deluks917 28 May 2017 05:12PM

Five Recommended Articles You Might Have Missed:

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action.

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading.

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections.

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication.

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate.

Scott:

Those Modern Pathologies by Scott Alexander - You can argue X is a modern pathology for almost any value of X. Scott demonstrates this by repeated example. Among other things "Aristotelian theory of virtue" and "Homer's Odyssey" get pathologized.

The Atomic Bomb Considered As Hungarian High School Science Fair Project by Scott Alexander - Ashkenazi Jewish Intelligence. An explanation of Hungarian dominance in physics and science in the mid 1900s.

Classified Ads Thread by Scott Alexander - Open thread where people post ads. People are promoting their websites and some of them are posting actual job ads among other things.

Open Thread 76 by Scott Alexander - Bi-weekly Open thread.

Postmarketing Surveillance Is Good And Normal by Scott Alexander - Scott shows why a recent Scientific American study does not imply the FDA is too risky.

Epilogue by Scott Alexander (Unsong) - All's Whale that Ends Whale.

Polyamory Is Not Polygyny by Scott Alexander - A quick review of how polyamory actually functions in the rationalist community.

Bail Out by Scott Alexander - "About a fifth of the incarcerated population – the top of the orange slice, in this graph – are listed as “not convicted”. These are mostly people who haven’t gotten bail. Some are too much of a risk. But about 40% just can’t afford to pay."

Rationalist:

Strong Men Are Socialist Reports A Study That Previously Reported The Opposite by Jacob Falkovich (Put A Number On It!) - Defense Against the Dark Statistical Arts. Jacob provides detailed commentary on a popular study and shows that the study's dataset can be used to support the opposite conclusion, with p = 0.0086.

Highly Advanced Tulpamancy 101 For Beginners by H i v e w i r e d - Application of lesswrong theory to the concept of the self. In particular the author applies "How an Algorithm Feels from the Inside" and "Map and Territory". Hive then goes into the details of creating and interacting with tulpas. "A tulpa is an autonomous entity existing within the brain of a “host”. They are distinct from the host in that they possess their own personality, opinions, and actions"

Existential Risk From Ai Without An Intelligence by Alex Mennen (lesswrong) - Reasons why an intelligence explosion might not occur and reasons why we might have a problem anyway.

Dragon Army Theory Charter (30min Read) by Duncan Sabien (lesswrong) - A detailed plan for an ambitious military style rationalist house. The major goals include self-improvement, high quality group projects and the creation of a group with absolute trust in one another. The leader of the house is the curriculum director and head of product at CFAR.

The Story Of Our Life by H i v e w i r e d - The authors explain their pre-rationalist life and connection to the community. They then argue the rationalist community should take better care of one another. "Venture Rationalism".

Don't Believe in God by Tyler Cowen - Seven arguments for not believing in God. Among them: Lack of Bayesianism among believers, the degree to which people follow their family religion and the fundamental weirdness of reality.

Antipsychotics Might Cause Cognitive Impairment by Sarah Constantin (Otium) - A harrowing personal account of losing abstract thinking ability on Risperdal. The author conducts a literature review, and concludes with some personal advice about taking medication.

The Four Blind Men The Elephant And Alan Kay by Meredith Paterson (Status 451) - Managing technical teams. Taking a new perspective is worth 90 IQ points. Getting better enemies. Guerrilla action.

Qualia Computing At Consciousness Hacking June 7th 2017 by Qualia Computing - Qualia Computing will present in San Francisco on June 7th at Consciousness Hacking. The event description is detailed and should give readers a good intro to Qualia Computing's goals. The author's research goal is to create a mathematical theory of pain/pleasure and be able to measure these directly from brain data.

Notes From The Hufflepuff Unconference (Part 1) by Raemon (lesswrong) - Goal: Improve at: "social skills, empathy, and working together, sticking with things that need sticking with". The article is a detailed breakdown of the unconference including: Ray's Introductory Speech, a long list of what people want to improve on, the lightning talks, the 4 breakout sessions, proposed solutions, further plans, and closing words. Links to conference notes are included for many sections.

Is Silicon Valley Real by Ben Hoffman (Compass Rose) - The old culture of Silicon Valley is mostly gone, replaced by something overpriced and materialist. Ben checks the details of Scott Alexander's list of six noble startups and finds only two in SV proper.

Why Is Harry Potter So Popular by Ozy (Thing of Things) - Ozy discusses a paper on song popularity in an artificial music market. Social dynamics had a big impact on song ratings. "Normal popularity is easily explicable by quality. Stupid, wild, amazing popularity is due to luck."

Design A Better Chess by Robin Hanson - Can we design a game that promotes even more useful honesty than chess? A link to Hanson's review of Gary Kasparov's book is included.

Deserving Truth 2 by Andrew Critch - How the author's values changed over time. Originally he tried to maximize his own positive sensory experiences. The things he cared about began to include more things, starting with his GF's experiences and values. He eventually rejects "homo economicus" thinking.

A Theory Of Hypocrisy by João Eira (Lettuce be Cereal) - Hypocrisy evolved as a way to solve free rider problems. "It pays to be a free rider. If no one finds out"

Building Community Institution In Five Hours a Week by Particular Virtue - Eight pieces of advice for running a successful meetup. The author and zir partner have been running lesswrong events for five years.

Dwelling In Possibility by Sarah Constantin (Otium) - Leadership. Confidence in the face of the uncertainty and imperfection. Losing yourself when you try to step back and facilitate.

Ai Safety Three Human Problems And One Ai Issue by Stuart Armstrong (lesswrong) - Humans have poor predictions, don't know their values and aren't agents. Ai might be very powerful. A graph of which problems many Ai risk solutions target.

Recovering From Failure by mindlevelup - Avoid negative spirals, figure out why you failed, List of questions to ask yourself. Strategies -> Generate good alternatives, metacognitive affordances.

Review The Dueling Neurosurgeons by Sam Kean by Aceso Under Glass - Positive review. Author learned a lot. Speculation on a better way to teach science.

Principia Qualia Part 2: Valence by Qualia Computing - A mathematical theory of valence (what makes experience feel good or bad). Speculative but the authors make concrete predictions. Music plays a heavy role.

Im Not Seaing It by Robin Hanson - Arguments against seasteading.

EA:

One of the more positive surprises by GiveDirectly - Links post. Eight articles on Give Directly, Cash Transfer and Basic Income.

Returns Functions And Funding Gaps by the Center for Effective Altruism (EA forum) - Links to CEA's explanation of what "returns functions" are and how using them compares to "funding gap" model. They give some arguments why returns functions are a superior model.

Online Google Hangout On Approaches To by whpearson (lesswrong) - Community meeting to discuss Ai risk. Will use "Optimal Brainstorming Theory". Currently early stage. Sign up and vote on what times you are available.

Expected Value Estimates We Cautiously Took by The Oxford Prioritization Project (EA forum) - Details of how the four bayesian probability models were compared to produce a final decision. Some discussion of how assumptions affect the final result. Actual code is included.

Four Quantitative Models Aggregation And Final by The Oxford Prioritization Project (EA forum) - 80K hours, MIRI, Good Foods Institute and StrongMinds were considered. Decisions were made using concrete Bayesian EV calculations. Links to the four models are included.

Peer to Peer Aid: Cash in the News by GiveDirectly - 8 Links about GiveDirectly, cash transfer and basic income.

The Value Of Money Going To Different Groups by The Center for Effective Altruism - "It is well known that an extra dollar is worth less when you have more money. This paper describes the way economists typically model that effect, using that to compare the effectiveness of different interventions. It takes remittances as a particular case study."

Politics and Economics:

Study Of The Week Better And Worse Ways To Attack Entrance Exams by Freddie deBoer - Freddie's description of four forms of "test validity". The SAT and ACT are predictive of college grades, one should criticize them from other angles. Freddie briefly gives his socialist critique.

How To Destroy Civilization by Zvi Moshowitz - A parable about the game "Advanced Civilization". The difficulties of building a coalition to lock out bad actor. Donald Trump. [Extremely Partisan]

Trust Assimilation by Bryan Caplan - Data on how much immigrants and their children trust other people. How predictive is the trust level of their ancestral country. Caplan reviews papers and crunches the numbers himself.

There Are Bots, Look Around by Renee DiResta (ribbonfarm) - High frequency trading disrupted finance. Now algorithms and bots are disrupting the marketplace of ideas. What can finance's past teach us about politics' future?

The Behavioral Economics of Paperwork by Bryan Caplan - Vast Numbers of students miss financial aid because they don't fill out paperwork. Caplan explores the economic implications of the fact that "Humans hate filling out paperwork. As a result, objectively small paperwork costs plausibly have huge behavioral response".

The Nimby Challenge by Noah Smith - Smith makes an economic counterargument to the claims that building more housing wouldn't lower prices. Noah includes 6 lessons for engaging with NIMBYs.

Study Of The Week What Actually Helps Poor Students: Human Beings by Freddie deBoer - Personal feedback, tutoring and small group instruction had the largest positive effect. Includes Freddie's explanation of meta-analysis.

Vast Empirical Literature by Marginal REVOLUTION - Tyler's 10 thoughts on approaching fields with large literatures. He is critical of Noah's "two paper rule" and recommends a lot of reading.

Impact Housing Price Restrictions by Marginal REVOLUTION - Link to a job market paper on the economic effects of housing regulation.

Me On Anarcho Capitalism by Bryan Caplan - Bryan is interviewed on the Rubin Report about Ancap.

Campbells Law And The Inevitability Of School Fraud by Freddie deBoer - Rampant grade inflation. Lowered standards. Campbell's law says that once you base policy on a metric, that metric will always start being gamed.

Nimbys Economic Theories: Sorry Not Sorry by Phil (Gelman's Blog) - Gelman got a huge amount of criticism on his post on whether building more housing will lower prices in the Bay. He responds to some of the criticism here. Long for Gelman.

Links 8 by Artir (Nintil) - Link Post. Physics, Technology, Philosophy, Economics, Psychology and Misc.

Arguing About How The World Should Burn by Sonya Mann (ribbonfarm) - Two different ways to decide who to exclude. One focuses on process, the other on content. Scott Alexander and Nate Soares are quoted. Heavily [Culture War].

Seeing Like A State by Bayesian Investor - A quick review of "Seeing like a state".

Whats Up With Minimum Wage by Sarah Constantin (Otium) - A quick review of the literature on the minimum wage. Some possible explanations for why raising it does not reduce unemployment.

Misc:

Entirely Too Many Pieces Of Unsolicited Advice To Young Writer Types by Freddie deBoer - Advice about not working for free, getting paid, interacting with editors, why 'Strunk and White' is awful, and taking writing seriously.

Conversations On Consciousness by Hivewired - The author is a plural system. Their hope is to introduce plurality by doing the following: "First, we’re each going to describe our own personal experiences, from our own perspectives, and then we’re going to discuss where we might find ourselves within the larger narrative regarding consciousness."

Notes On Debugging Clojure Code by Eli Bendersky - Dealing with Clojure's cryptic exceptions, finding which form an exception comes from, tracing and logging, and deeper tracing inside cond forms.

How to Think Scientifically About Scientists’ Proposals for Fixing Science by Andrew Gelman - Gelman asks how to scientifically evaluate proposals to fix science. He considers educational, statistical, research practice and institutional reforms. Excerpts from an article Gelman wrote, the full paper is linked.

Call for Volunteers who Want to Exercize by Aceso Under Glass - Author is looking for volunteers who want to treat their anxiety or mood disorder with exercise.

Learning Deep Learning the Easy Way with Keras (lesswrong) - Articles showing the power of neural networks. Discussion of ML frameworks. Resources for learning.

Unsong of Unsongs by Scott Aaronson - Aaronson went to the Unsong wrap party. A quick review of Unsong. Aaronson talks about how Scott Alexander defended him in "Untitled".

2016 Spending by Mr. Money Mustache - Full details of last year's budget. Spending broken down by category.

Amusement:

And Another Physics Problem by protokol2020 - Two planets: which has a higher average surface temperature?

A mysterious jogger by Jacob Falkovich (Put A Number On It!) - A mysterious jogger. Very short fiction.

Podcast:

Persuasion And Control by Waking Up with Sam Harris - "surveillance capitalism, the Trump campaign's use of Facebook, AI-enabled marketing, the health of the press, Wikileaks, ransomware attacks, and other topics."

Raj Chetty: Inequality, Mobility and the American Dream by Conversations with Tyler - "As far as I can tell, this is the only coverage of Chetty that covers his entire life and career, including his upbringing, his early life, and the evolution of his career, not to mention his taste in music"

Is Trump's incompetence saving us from his illiberalism? by The Ezra Klein Show - Political Scientist Yascha Mounk. "What Mounk found is that the consensus we thought existed on behalf of democracy and democratic norms is weakening."

The Moral Complexity Of Genetics by Waking Up with Sam Harris - "Sam talks with Siddhartha Mukherjee about the human desire to understand and manipulate heredity, the genius of Gregor Mendel, the ethics of altering our genes, the future of genetic medicine, patent issues in genetic research, controversies about race and intelligence, and other topics."

Esther Perel by The Tim Ferriss Show - The Relationship Episode: Sex, Love, Polyamory, Marriage, and More

Lant Pritchett by EconTalk - Growth, and Experiments

Meta Learning by Tim Ferriss - Education, accelerated learning, and my mentors. Conversation with Charles Best, the founder and CEO of DonorsChoose.org.

Bryan Stevenson On Why The Opposite Of Poverty Isn't Wealth by The Ezra Klein Show - Founder and executive director of the Equal Justice Initiative. Justice for the wrongly convicted on Death Row.

Thoughts on civilization collapse

15 Stuart_Armstrong 04 May 2017 10:41AM

Epistemic status: an idea I believe moderately strongly, based on extensive reading but not rigorous analysis.

We may have a dramatically wrong idea of civilization collapse, one mainly inspired by movies that obsess over dramatic tales of individual heroism.

 

Traditional view:

In a collapse, anarchy will break out, and it will be a war of all against all or small groups against small groups. Individual weaponry (including heavy weapons) and basic food production will become paramount; traditional political skills, not so much. Government collapse is long term. Towns and cities will suffer more than the countryside. The best course of action is to have a cache of weapons and food, and to run for the hills.

 

Alternative view:

In a collapse, people will cling to their identified tribe for protection. Large groups will have no difficulty suppressing or taking over individuals and small groups within their areas of influence. Individual weaponry may be important (given less of a police force), but heavy weaponry will be almost irrelevant as no small group will survive alone. Food production will be controlled by the large groups. Though the formal "government" may fall, and countries may splinter into more local groups, government will continue under the control of warlords, tribal elders, or local variants. Cities, with their large and varied-skill workforce, will suffer less than the countryside. The best course of action is to have a stash of minor luxury goods (solar-powered calculators, comic books, pornography, batteries, antiseptics) and to make contacts with those likely to become powerful after a collapse (army officers, police chiefs, religious leaders, influential families).

Possible sources to back up this alternative view:

  • The book Sapiens argues that governments and markets are the ultimate enablers of individualism, with extended-family-based tribalism as the "natural" state of humanity.
  • The history of Somalia demonstrates that laws and enforcement continue even after a government collapse, by going back to more traditional structures.
  • During China's period of anarchy, large groups remained powerful: the nationalists, the communists, the Japanese invaders. The other sections of the country were generally under the control of local warlords.
  • RationalWiki argues that examples of collapse go against the individualism narrative.

 

Mode Collapse and the Norm One Principle

14 tristanm 05 June 2017 09:30PM

[Epistemic status: I assign a 70% chance that this model proves to be useful, 30% chance it describes things we are already trying to do to a large degree, and won't cause us to update much.] 

I'm going to talk about something that's a little weird, because it uses some results from some very recent ML theory to make a metaphor about something seemingly entirely unrelated - norms surrounding discourse. 

I'm also going to reach some conclusions that surprised me when I finally obtained them, because it caused me to update on a few things that I had previously been fairly confident about. This argument basically concludes that we should adopt fairly strict speech norms, and that there could be great benefit to moderating our discourse well. 

I argue that in fact, discourse can be considered an optimization process and can be thought of in the same way that we think of optimizing a large function. As I will argue, thinking of it in this way will allow us to make a very specific set of norms that are easy to think about and easy to enforce. It is partly a proposal for how to solve the problem of dealing with speech that is considered hostile, low-quality, or otherwise harmful. But most importantly, it is a proposal for how to ensure that the discussion always moves in the right direction: Towards better solutions and more accurate models. 

It will also help us avoid something I'm referring to as "mode collapse" (where new ideas generated are non-diverse and are typically characterized by adding more and more details to ideas that have already been tested extensively). It's also highly related to the concepts discussed in the Death Spirals and the Cult Attractor portion of the Sequences. Ideally, we'd like to be able to make sure that we're exploring as much of the hypothesis space as possible, and there's good reason to believe we're probably not doing this very well.  

The challenge: Making sure we're searching for the global optimum in model-space sometimes requires reaching out blindly into the frontiers, the not well-explored regions, which runs the risk of ending up somewhere very low-quality or dangerous. There are also sometimes large gaps between very different regions of model-space where the quality of the model is very low in-between, but very high on each side of the gap. This requires traversing through potentially dangerous territory and being able to survive the whole way through.

(I'll be using terms like "models" and "hypotheses" quite often, and I hope this isn't confusing. I am using them very broadly, to refer to both theoretical understandings of phenomena and blueprints for practical implementations of ideas). 

We desire to have a set of principles which allows us to do this safely - to think about models of the world that are new and untested, solutions for solving problems that have never been done in a similar way - and they should ensure that, eventually, we can reach the global optimum. 

Before we derive that set of principles, I am going to introduce a topic of interest from the field of Machine Learning. This topic will serve as the main analogy for the rest of this piece, and serve as a model for how the dynamics of discourse should work in the ideal case. 

I. The Analogy: Generative Adversarial Networks

For those of you who are not familiar with the recent developments in deep learning, Generative Adversarial Networks (GANs)[intro pdf here] are a new class of generative models that is ideal for producing high-quality samples from very high-dimensional, complex distributions. They have caused great buzz and hype in the deep-learning community due to how impressive some of the samples they produce are, and how efficient they are at generation.

Put simply, a generator model and a critic (sometimes called a discriminator) model perform a two-player game where the critic is trained to distinguish between samples produced by the generator and the "true" samples taken from the data distribution. In turn, the generator is trained to maximize the critic's loss function. Both models are usually parametrized by deep neural networks and can be trained by taking turns running a gradient descent step on each. The Nash equilibrium of this game is when the generator's distribution matches the data distribution perfectly. This is never really borne out in practice, but sometimes it gets so close that we don't mind. 
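
To make the shape of this two-player game concrete, here is a minimal, illustrative sketch of the alternating updates in PyTorch. It is a toy example rather than the setup from any particular paper; the network sizes, learning rates, and the name real_batch are assumptions made purely for illustration.

    import torch
    import torch.nn as nn

    # Toy GAN: both players are tiny MLPs. "real_batch" stands in for a batch of
    # samples from the data distribution; "noise_dim" is the generator's input size.
    noise_dim, data_dim = 8, 2
    G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))
    opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real_batch):
        batch = real_batch.size(0)
        # Critic step: learn to tell real samples (label 1) from generated ones (label 0).
        fake = G(torch.randn(batch, noise_dim)).detach()
        d_loss = bce(D(real_batch), torch.ones(batch, 1)) + bce(D(fake), torch.zeros(batch, 1))
        opt_D.zero_grad(); d_loss.backward(); opt_D.step()
        # Generator step: maximize the critic's loss, i.e. make generated samples look real.
        fake = G(torch.randn(batch, noise_dim))
        g_loss = bce(D(fake), torch.ones(batch, 1))
        opt_G.zero_grad(); g_loss.backward(); opt_G.step()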

GANs have one principal failure mode, commonly called "mode collapse" (a term I'm going to appropriate to refer to a much broader concept), which is often attributed to the instability of the system. It was often believed that, if a careful balance between the generator and critic could not be maintained, one would eventually overpower the other - leading the critic to provide either useless or overly harsh information to the generator. Useless information will cause the generator to update very slowly or not at all, and overly harsh information will lead the samples to "collapse" to a small region of the data space containing the easiest targets for the generator to hit.  

This problem was essentially solved earlier this year due to a series of papers that propose modifications to the loss functions that GANs use, and, most crucially, add another term to the critic's loss which stabilizes the gradient (with respect to the inputs) to have a norm close to one. It was recognized that we actually desire an extremely powerful critic so that the generator can make the best updates it possibly can, but the updates themselves can't go beyond what the generator is capable of handling. With these changes to the GAN formulation, it became possible to use crazy critic networks such as ultra-deep ResNets and train them as much as desired before updating the generator network.  
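
As a rough sketch of that stabilizing term, here is what a critic loss with a gradient penalty looks like, in the style of the Wasserstein-GAN gradient-penalty line of work the post alludes to. The coefficient lam and the tensor shapes are assumptions made for the example, not details taken from the post.

    import torch

    def critic_loss_with_gp(D, real, fake, lam=10.0):
        # Wasserstein-style critic loss: score real samples high and fake samples low.
        fake = fake.detach()
        w_loss = D(fake).mean() - D(real).mean()
        # Gradient penalty: evaluate the critic's gradient at points interpolated
        # between real and fake samples, and push its norm toward one.
        eps = torch.rand(real.size(0), 1)
        interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
        grads = torch.autograd.grad(D(interp).sum(), interp, create_graph=True)[0]
        penalty = ((grads.norm(2, dim=1) - 1.0) ** 2).mean()
        return w_loss + lam * penalty

Because the penalty keeps the critic's gradient close to unit norm wherever it is evaluated, the critic can be trained heavily without handing the generator updates it cannot absorb.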

The principle behind their operation is rather simple to describe, but unfortunately it is much more difficult to explain why they work so well. However, I believe that as long as we know how to make one, and know the specific implementation details that improve their stability, their principles can be applied more broadly to achieve success in a wide variety of regimes. 

II. GANs as a Model of Discourse

In order to use GANs as a tool for conceptual understanding of discourse, I propose to model the dynamics of debate as a collection of hypothesis-generators and hypothesis-critics. This could be likened to the structure of academia - researchers publish papers, they go through peer review, the work is iterated on and improved - and over time this process converges to more and more accurate models of reality (or so we hope). Most individuals within this process play both roles, but in theory this process would still work even if they didn't. For example, Isaac Newton was a superb hypothesis generator, but he also had some wacky ideas that most of us would consider to be obviously absurd. Nevertheless, calculus and Newtonian physics became a part of our accepted scientific knowledge, and alchemy didn't. The system adopted and iterated on his good ideas while throwing away the bad. 

Our community should be capable of something similar, while doing it more efficiently and not requiring the massive infrastructure of academia. 

A hypothesis-generator is not something that just randomly pulls out a model from model-space. It proposes things that are close modifications of things it already holds to be likely within its model (though I expect this point to be debatable). Humans are both hypothesis-generators and hypothesis-critics. And as I will argue, that distinction is not quite as sharply defined as one would think. 

I think there has always been an underlying assumption within the theory of intelligence that creativity and recognition / distinction are fundamentally different. In other words, one can easily understand Mozart to be a great composer, but it is much more difficult to be a Mozart. Naturally this belief made its way into the field of Artificial Intelligence too, and became somewhat of a dogma. Computers might be able to play Chess, they might be able to play Go, but they aren't doing anything fundamentally intelligent. They lack the creative spark; they work on pure brute-force calculation only, with maybe some heuristics and tricks that their human creators bestowed upon them.  

GANs seem to defy this principle. Trained on a dataset of photographs of human faces, a GAN generator learns to produce near-photo-realistic images that nonetheless do not fully match any of the faces the critic network saw (one of the reasons why CelebA was such a good choice to test these on), and are therefore in some sense producing things which are genuinely original. It may have once been thought that there was a fundamental distinction between creation and critique, but perhaps that's not really the case. GANs were a surprising discovery, because they showed that it was possible to make impressive "creations" by starting from random nonsense and slowly tweaking it in the direction of "good" until it eventually got there (well okay, that's basically true for the whole of optimization, but it was thought to be especially difficult for generative models).

What does this mean? Could someone become a "Mozart" by beginning a musical composition from random noise and slowly tweaking it until it became a masterpiece?

The above seems to imply "yes, perhaps." However, this is highly contingent on the quality of the "tweaking." It seems possible only as long as the directions to update in are very high quality. What if they aren't very high quality? What if they point nowhere, or in very bad directions?

I think the default distribution of discourse is that it is characterized by a large number of these directionless, low-quality contributions. And that it's likely that this is one of the main factors behind mode collapse. This is related to what has been noted before: too much intolerance for imperfect ideas (or ideas outside of established dogma) in a community prevents useful tasks from being accomplished, and progress from being made. Academia does not seem immune to this problem. Where low-quality or hostile discussion is tolerated is where this risk is greatest.  

Fortunately, making sure we get good "tweaks" seems to be the easy part. Critique is in high abundance. Our community is apparently very good at it. We also don't need to worry much about the ratio of hypothesis-generators to hypothesis-critics, as long as we can establish good principles that allow us to follow GANs as closely as possible. The nice feature of the GAN formulation is that you are allowed to make the critic as powerful as you want. In fact, the critic should be more powerful than the generator (If the generator is too powerful, it just goes directly to the argmax of the critic). 

(In addition, any collection of generators is a generator, and any collection of critics is a critic. So this formulation can be applied to the community setting).

III. The Norm One Principle

So the question then becomes: how do we take an algorithm governing a game between models much simpler than a human - a game whose tweaks consist of nothing more than a few very simple equations - and apply the same idea to human discourse? 

What I devise here is a strategy that takes the concept of keeping the norm of the critic's gradient as close to one as possible, and uses it as a heuristic for how to structure appropriate discourse. 

(This is where my argument gets more speculative and I expect to update this a lot, and where I welcome the most criticism).

What I propose is that we begin modeling the concept of "criticism" based on how useful it is to the idea-generator receiving the criticism. Under this model, I think we should start breaking down criticism into two fundamental attributes:

  1. Directionality - does the criticism contain highly useful information, such that the "generator" knows how to update their model / hypothesis / proposal?
  2. Magnitude - Is the criticism too harsh, does it point to something completely unlike the original proposal, or otherwise require changes that aren't feasible for the generator to make?

My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement.

If a critique can satisfy our requirements for both directionality and magnitude, then it serves a useful purpose. The inverse claim is that if we can't follow these requirements, we risk falling into mode collapse, where the ideas commonly proposed are almost indistinguishable from the ones which preceded them, and ideas which deviate too far from the norm are harshly condemned and suppressed. 

I think it's natural to question whether or not restricting criticism to follow certain principles is a form of speech suppression that prevents useful ideas from being considered. But the pattern I'm proposing doesn't restrict the "generation" process, the creative aspect which produces new hypotheses. It doesn't restrict the topics that can be discussed. It only restricts the criticism of those hypotheses, such that it is maximally useful to the source of the hypothesis. 

One of the primary fears behind having too much criticism is that it discourages people from contributing because they want to avoid the negative feedback. But under the Norm One Principle, I think it is useful to distinguish between disagreement and criticism. I think if we're following these norms properly, we won't need to consider criticism to be a negative reward. In fact, criticism can be positive. Agreement could be considered "criticism in the same direction you are moving in." Disagreement would be the opposite. And these norms also eliminate the kind of feedback that tends to be the most discouraging. 

For example, some things which violate "Norm One":

  • Ad hominem attacks (typically directionless). 
  • Affective Death Spirals (unlimited praise or denunciation is usually directionless, and usually very high magnitude). 
  • Signs that cause aversion (things I "don't like", that trigger my System 1 alarms, which probably violates both directionality and magnitude). 
  • Lengthy lists of changes to make (norm greater than 1, ideally we want to try to focus on small sets of changes that have the highest priority). 
  • Repetition of points that have already been made (norm greater than one). 

One of my strongest hopes is that whoever is playing the part of the "generator" is able to compile the list of critiques easily and use them to update somewhere close to the optimal direction. This would be difficult if the sum of all critiques is either directionless (many critics point in opposite or near-opposite directions) or very high-magnitude (critics simply say to get as far away from here as possible). 

But let's suppose that each individual criticism satisfies the Norm One principle. We will also assume that the generator is weighing each critique by their respect for whoever produced it, which I think is highly likely. Then the generator should be able to move in a direction unless the sum of the directions completely cancel out. It is unlikely for this to happen - unless there is very strong epistemic disagreement in the community over some fundamental assumptions (in which case the conversation should probably move over to that). 

In addition, it also becomes less likely for the directions to cancel out as the number of inputs increases. Thus, it seems that proposals for new models should be presented to a wide audience, and we should avoid the temptation to keep our proposals hidden to all except for a small set of people we trust.
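
As a toy illustration of this aggregation (my own sketch, not part of the original argument), one can picture each critique as a roughly norm-one direction in idea-space, weighted by the generator's respect for the critic:

    import numpy as np

    def aggregate_critiques(directions, weights):
        # Each critique is (roughly) a norm-one direction; weights encode respect
        # for each critic. The update is their weighted mean, clipped back to norm
        # at most one so the step stays feasible for the generator.
        d = np.asarray(directions, dtype=float)
        w = np.asarray(weights, dtype=float).reshape(-1, 1)
        step = (w * d).sum(axis=0) / w.sum()
        n = np.linalg.norm(step)
        return step if n <= 1 else step / n

    # Two critics pulling in nearly opposite directions mostly cancel out;
    # adding a third critic breaks the deadlock.
    print(aggregate_critiques([[1, 0], [-0.9, 0.1]], [1, 1]))
    print(aggregate_critiques([[1, 0], [-0.9, 0.1], [0.5, 0.8]], [1, 1, 1]))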

So I think that in general, this proposed structure should tend to increase the amount of collective trust we have in the community, and that it favors transparency and favors diversity of viewpoints. 

But what of the possible failure modes of this plan? 

This model should fail if the specific details of its implementation either remove too much discussion, or fail to deal with individuals who refuse to follow the norms and refuse to update. Any implementation should allow room for anyone to update. Someone who posts an extremely hostile, directionless comment should be allowed chances to modify their contribution. The only scenario in which the "banhammer" becomes appropriate is when this model fails to apply: The cardinal sin of rationality, the refusal to update. 

IV. Building the Ideal "Generator"

As a final point, I'll note that the above assumes that generators will be able to update their models incrementally. The easy part, as I mentioned, was obtaining the updates; the hard part is accumulating them. This seems difficult with the infrastructure we have in place. What we do have is a good system for posting proposals and receiving feedback (the blog post / comment thread set-up), but this assumes that each "generator" is keeping track of their models by themselves and has to be fully aware of the status of other models on their own. There is no centralized "mixture model" anywhere that contains the full set of models weighted by how much probability they are given by the community. Currently, we do not have a good solution for this problem. 

However, it seems that the first conception of Arbital was centered around finding a solution to this kind of problem:

Arbital has bigger ambitions than even that. We all dream of a world that eliminates the duplication of effort in online argument - a world where, the same way that Wikipedia centralized the recording of definite facts, an argument only needs to happen once, instead of being reduplicated all over the Internet; with all the branches of the argument neatly recorded in the same place, along with some indication of who believes what. A world where 'just check Arbital' had the same status for determining the current state of debates, as 'just check Wikipedia' now has when somebody starts arguing about the population of Melbourne. There's entirely new big subproblems and solutions, not present at all in the current Arbital, that we'd need to tackle that considerably more difficult problem. But to solve 'explaining things' is something of a first step. If you have a single URL that you can point anyone to for 'explaining Bayes', and if you can dispatch people to different pages depending on how much math they know, you're starting to solve some of the key subproblems in removing the redundancy in online arguments.

If my proposed model is accurate, then it suggests that the problem Arbital aims to solve is in fact quite crucial to solve, and that the developers of Arbital should consider working through each obstacle they face without pivoting from this original goal. I feel confident enough that this goal should be high priority that I'd be willing to support its development in whatever way is deemed most helpful and is feasible for me (I am not an investor, but I am a programmer and would also be capable of making small donations, or contributing material). 

The only thing that this model would require of Arbital would be to make it as open as possible to contribute, and then perform heavy moderation or filtering of contributed content (but importantly not the other way around, where it is closed to a small group of trusted people).

Currently, the incremental changes that would have to be made to LessWrong and related sites like SSC would simply be increased moderation of comment quality. Otherwise, any further progress on the problem would require overcoming much more serious obstacles requiring significant re-design and architecture changes. 

Everything I've written above is also subject to the model I've just outlined, and therefore I expect to make incremental updates as feedback to this post accrues.

My initial prediction for feedback to this post is that the ideas might be considered helpful and offer a useful perspective or a good starting point, but that there are probably many details that I have missed that would be useful to discuss, or points that were not quite well-argued or well thought-out. I will look out for these things in the comments.   

[Link] Reality has a surprising amount of detail

14 jsalvatier 13 May 2017 08:02PM

Akrasia Tactics Review 3: The Return of the Akrasia

14 malcolmocean 10 April 2017 03:05PM

About three and a half years ago, polutropon ran an akrasia tactics review, following the one orthonormal ran three and a half years prior to that: an open-ended survey asking Less Wrong posters to give numerical scores to productivity techniques that they'd tried, with the goal of getting a more objective picture of how well different techniques work (for the sort of people who post here). Since it's been years since the others and the rationality community has grown and developed significantly while retaining akrasia/motivation/etc as a major topic, I thought it'd be useful to have a new one!

(Malcolm notes: it seems particularly likely that this time there will be some noteworthy individually-invented techniques, as people seem to be doing a lot of that sort of thing these days!)

A lightly modified version of the instructions from the previous post:

  1. Note what technique you've tried. Techniques can be anything from productivity systems (Getting Things Done, Complice) to social incentives (precommitting in front of friends) to websites or computer programs (Beeminder, Leechblock) to chemical aids (Modafinil, Caffeine). If it's something that you can easily link to information about, please provide a link and I'll add it when I list the technique; if you don't have a link, describe it in your comment and I'll link that. It could also be a cognitive technique you developed or copied from a friend, which might not have a clear name but you can give it one if you like!
  2. Give your experience with it a score from -10 to +10 (0 if it didn't change the status quo, 10 if it ended your akrasia problems forever with no unwanted side effects, negative scores if it actually made your life worse, -10 if it nearly killed you). For simplicity's sake, I'll only include reviews that give numerical scores.
  3. Describe your experience with it, including any significant side effects. Please also say approximately how long you've been using it, or if you don't use it anymore how long you used it before giving up.

Every so often, I'll combine all the data back into the main post, listing every technique that's been reviewed at least twice with the number of reviews, average score, standard deviation and common effects. I'll do my best to combine similar techniques appropriately, but it'd be appreciated if you could try to organize it a bit by replying to people doing similar things and/or saying if you feel your technique is (dis)similar to another.
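
(For concreteness, here is a minimal sketch of the kind of aggregation described above; the technique names and scores below are invented purely for illustration.)

    from statistics import mean, stdev

    # Toy aggregation: reviews collected from comments as (technique, score) pairs.
    reviews = [("Beeminder", 4), ("Beeminder", 7), ("Pomodoro", 2),
               ("Pomodoro", -1), ("Pomodoro", 5), ("Modafinil", 8)]

    by_technique = {}
    for technique, score in reviews:
        by_technique.setdefault(technique, []).append(score)

    for technique, scores in sorted(by_technique.items()):
        if len(scores) < 2:  # only list techniques reviewed at least twice
            continue
        print(f"{technique}: n={len(scores)}, mean={mean(scores):.1f}, sd={stdev(scores):.1f}")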

I'm not going to provide an initial list due to the massive number of possible techniques and concern about prejudicing answers, but you can look back on the list in the last post or the previous one if you want. If you have any suggestions for how to organize this (that wouldn't require huge amounts of extra effort on my part), I'm open to hearing them.

Thanks for your data!

(There's a meta thread here for comments that aren't answers to the main prompt.)

Brief update on the consequences of my "Two arguments for not thinking about ethics" (2014) article

14 Kaj_Sotala 05 April 2017 11:25AM

In March 2014, I posted on LessWrong an article called "Two arguments for not thinking about ethics (too much)", which started out with:

I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.

I now think that doing this has been more harmful than it has been useful, for two reasons: there's no strong evidence to assume that this will give us very good insight to our preferred ethical theories, and more importantly, because thinking in those terms will easily lead to akrasia.

I ended the article with the following paragraph:

My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produce motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped entirely asking the question of what I should do, and began to focus entirely on what I want to do, including the question of which of my currently existing wants are ones that I'd wish to cultivate further. (Of course there are some things like doing my tax returns that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's way too short of a time to say whether this actually leads to increased productivity in the long term, but at least it feels great for my mental health, at least for the time being.

The long-term update (three years after first posting the article) is that starting to shift my thought patterns in this way was totally the right thing to do, and necessary for starting a long and slow recovery from depression. It's hard to say entirely for sure how big of a role this has played, since the patterns of should-thought were very deeply ingrained and have been slow to get rid of; I still occasionally find myself engaging in them. And there have been many other factors also affecting my recovery during this period, so only a part of the recovery can be attributed to the "utilitarianism-excising" with any certainty. Yet, whenever I've found myself engaging in such patterns of thought and managed to eliminate them, I have felt much better as a result. I do still remember a time when a large part of my waking-time was driven by utilitarian thinking, and it's impossible for me to properly describe how relieved I now feel for the fact that my mind feels much more peaceful now.

The other obvious question besides "do I feel better now" is "do I actually get more good things done now"; and I think that the answer is yes there as well. So I don't just feel generally better, I think my actions and motivations are actually more aligned with doing good than they were when I was trying to more explicitly optimize for following utilitarianism and doing good in that way. I still don't feel like I actually get a lot of good done, but I attribute much of this to still not having entirely recovered; I also still don't get a lot done that pertains to my own personal well-being. (I just spent several months basically doing nothing, because this was pretty much the first time when I had the opportunity, finance-wise, to actually take a long stressfree break from everything. It's been amazing, but even after such an extended break, the burnout symptoms still pop up if I'm not careful.)

Instrumental Rationality 1: Starting Advice

13 lifelonglearner 18 June 2017 06:43PM

Starting Advice

[This is the first post in the Instrumental Rationality Sequence. It's a collection of four concepts that I think are central to instrumental rationality—caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations.

Note that these essays are derivative of things I've written here before, so there may not be much new content in this post. (But I wanted to get something out as it'd been about a month since my last update.)

My main goal with this collection was to polish / crystallize past points I've made. If things here are worded poorly, unclear, or don't seem useful, I'd really appreciate feedback to try and improve.]

 


In Defense of the Obvious:

[As advertised.]

A lot of the things I’m going to go over in this sequence are sometimes going to sound obvious, boring, redundant, or downright tautological. This essay is here to convince you that you should try to listen to the advice anyway, even if it sounds stupidly obvious.

First off, our brains don’t always see all the connections at once. Thus, even if some given advice is apparently obvious, you still might be learning things.

For example, say someone tells you, “If you want to exercise more, then you should probably exercise more. Once you do that, you’ll become the type of person who exercises more, and then you’ll likely exercise more.”

The above advice might sound pretty silly, but it may still be useful. Often, our mental categories for “exercise” and “personal identity” are in different places. Sure, it’s tautologically true that someone who exercises becomes a person who exercises more. But if you’re not explicitly thinking about how your actions change who you are, then there’s likely still something new to think about.

Humans are often weirdly inconsistent with our mental buckets—things that logically seem like they “should” be lumped together often aren't. By paying attention to even tautological advice like this, you’re able to form new connections in your brain and link new mental categories together, perhaps discovering new insights that you “already knew”.

Secondly, obvious advice tends to be low-hanging fruit. If your brain is pattern-matching something as “boring advice” or “obvious”, you’ve likely heard it many times before.

For example, you can probably guess the top 5 things on any “How to be Productive” list—make a schedule, remove distractions, take periodic breaks, etc. etc. You can almost feel your brain roll its metaphorical eyes at such dreary, well-worn advice.

But if you’ve heard these things repeated many times before, this is also good reason to suspect that, at least for a lot of people, it works. Meaning that if you aren’t taking such advice already, you can probably get a boost by doing so.

If you just did those top 5 things, you’d probably already be quite the productive person.

The trick, then, is to actually do them. That means doing the obvious thing.

 

Lastly, it can be easy to discount obvious advice when you’ve seen too much of it. When you’re bombarded with boring-seeming advice from all angles, it’s easy to become desensitized.

What I mean is that it’s possible to dismiss obvious advice outright because it sounds way too simple. “This can’t possibly work,” your brain might say, “The secret to getting things done must be more complex!”

There’s something akin to the hedonic treadmill happening here where, after having been exposed to all the “normal” advice, you start to seek out deeper and deeper ideas in search of some sort of mental high. What happens is that you become a kind of self-help junkie.

You can end up craving the bleeding edge of crazy ideas because literally nothing else seems worthwhile. You might end up dismissing normal helpful ideas simply because they’re not paradigm-crushing, mind-blowing, or mentally stimulating enough.

At which point, you’ve adopted quite the contrarian stance—you reject the typical idea of advice on grounds of its obviousness alone.

If this describes you, might I tempt you with the meta-contrarian point of view?

Here’s the sell: One of the secrets to winning at life is looking at obvious advice, acknowledging that it’s obvious, and then doing it anyway.

(That’s right, you can join the elite group of people who scoff at those who scoff at the obvious!)

You can both say, “Hey, this is pretty simple stuff I’ve heard a thousand times before,” as well as say, “Hey, this is pretty useful stuff I should shut up and do anyway even if it sounds simple because I’m smart and I recognize the value here.”

At some point, being more sophisticated than the sophisticates means being able to grasp the idea that not all things have to be hyper complex. Oftentimes, the trick to getting something done is simply to get started and start doing it.

Because some things in life really are obvious.

Hunting for Practicality:

[This is about looking for ways to have any advice you read be actually useful, by having it apply to the real world. ]

Imagine someone trying to explain exactly what the mitochondria does in the cell, and contrast that to someone trying to score a point in a game of basketball.

There’s something clearly different about what each person is trying to do, even if we lumped both under the label of “learning” (one is learning about cells and the other is learning about basketball).

In learning, it turns out this divide is often drawn between declarative and procedural knowledge.

Declarative knowledge is like the student trying to puzzle out the mitochondria question; it’s about what you know.

In contrast, procedural knowledge, like the fledgling basketball player, is about what you do.

I bring up this divide because many of the techniques in instrumental rationality will feel like declarative knowledge, but they’ll really be procedural in nature.

For example, say you’re reading something on motivation, and you learn that “Motivation = Energy to do the thing + a Reminder to do the thing + Time to do the thing = E+R+T”.

What’ll likely happen is that your brain will form a new set of mental nodes that connects “motivation” to “E+R+T”. This would be great if I ended up quizzing you “What does motivation equal?” whereupon you’d correctly answer “E+R+T”.

But that’s not the point here! The point is to have the equation actually cash out into the real world and positively affect your actions. If information isn’t changing how you view things or act, then you’re probably not extracting all the value you can.

What that means is figuring out the answer to this question: "How do I see myself acting differently in the future as a result of this information?"

With that in mind, say you generate some examples and make a list.

Your list of real-world actions might end up looking like:

1) Remembering to stay hydrated more often (Energy)

2) Using more Post-It notes as memos (Reminder)

3) Start using Google Calendar to block out chunks of time (Time).

The point is to be always on the lookout for ways to see how you can use what you’re learning to inform your actions. Learning about all these things is only useful if you can find ways to apply them. You want to do more than have empty boxes that link concepts together. It’s important to have those boxes linked up to ways you can do better in the real world.

You want to actually put in some effort trying to answer the question of practicality.

 

 

 


Actually Practicing:

[This is about knowing the nuances of little steps behind any sort of self-improvement skill you learn, and how those little steps are important when learning the whole.]

So on one level, using knowledge from instrumental rationality is about how you take declarative-seeming information and find ways to actually get real-world actions out of it. That’s important.

But it’s also important to note that the very skill of “Generating Examples”—the thing you did in the above essay to even figure out which actions can fit in the above equation to fill in the blanks of E, R, and T—is itself a mental habit that requires procedural knowledge.

What I mean is that there’s a subtler thing that’s happening inside your head when you try to come up with examples—your brain is doing something—and this “something” is important.  

It’s important, I claim, because if we peer a little more deeply at what it means for your brain to generate examples, we’ll come away with a list of steps that will feel a lot like something a brain can do, a prime example of procedural knowledge.

For example, we can imagine a magician trying to learn a card trick. They go through the steps. First they need to spread the cards. Then comes the secret move. Finally comes the final reveal of the selected card in the magician’s pocket.

What the audience member sees is the full finished product. And indeed, the magician who’s practiced enough will also see the same thing. But it’s not until the magician goes through all the steps and understands how all the steps flow together to form the whole card trick that they’re ready to perform.

The idea here is to describe any mental skill with enough granularity and detail, at the 5-second level, such that you’d both be able to go through the same steps a second time and teach someone else. So being able to take skills and chunk them into smaller pieces also forms another core part of learning.

 


Realistic Expectations:

[An essay about having realistic expectations and looking past potentially harmful framing effects.]

There’s this tendency to get frustrated with learning mental techniques after just a few days. I think this is because people miss the declarative vs procedural distinction. (But you hopefully won’t fall prey to it because we’ve covered the distinction now!)

Once we treat learning mental habits as more like learning a sport, it becomes much easier to see that any expectation of immediately mastering a mental habit is rather silly; no one expects to master tennis in just a week.

So, when it comes to calibrating your expectations, I suggest that you renormalize them by treating learning mental habits more like learning a sport.

Keep that as an analogy, and you’ll likely get fairly well-calibrated expectations for learning all this stuff.

Still, what, then, might be a realistic time frame for learning?

We’ll go over habits in far more detail in a later section, but a rough number for now is approximately two months. You can expect that, on average, it’ll take you about 66 days to ingrain a new habit.

Similarly, instrumental rationality (probably) won’t make you a god. In my experience, studying these areas has been super useful, which is why I’m writing at all. But I would guess that, optimistically, I only about doubled my work output.

Of course your own mileage may vary depending on where you are right now, but this serves as the general disclaimer to keep your expectations within the bounds of reality.

Here, the main point is that, even though mental habits don’t seem like they should be similar to playing a sport, they really are. There’s something here about how first impressions can be rather deceiving.

For example, a typical trap I might fall into is missing the distinction between “theoretically possible” and “realistic”. I end up looking at the supposed 24 hours available to me everyday and then beating myself up for not being able to harness all 24 hours to do productive work.

But such a framing of the situation is inaccurate; things like sleep and eating are often essential to maximizing productivity for the rest of the hours! So when diving in and practicing, try to look a little deeper when setting your expectations.

 


The Rationalistsphere and the Less Wrong wiki

13 Deku-shrub 12 June 2017 11:29PM

Hi everyone!

For people not acquainted with me, I'm Deku-shrub, often known online for my cybercrime research, as well as fairly heavy involvement in the global transhumanist movement with projects like the UK Transhumanist Party and the H+Pedia wiki.

For almost 2 years now, on and off, I have been trying to grok what Less Wrong is about, but I've shirked reading all the sequences end to end, instead focusing on the most popular ideas transmitted by Internet cultural osmosis. I'm an amateur sociologist, and understanding Less Wrong falls within my wider project of understanding the different trends within the contemporary and historical transhumanist movement.

I'm very keen to pin down today's shape of the rationalistsphere and its critics, and the best place I have found to do this is on the wiki. Utilising Cunningham's Law at times, I've been building some key navigational and primer articles on the wiki. However, with the very lowest-hanging fruit now addressed, I ask: what next for the wiki?

Distillation of Less Wrong

There was a historical attempt to summarise all major Less Wrong posts, an interesting but incomplete project. It was also approached without a usefully normalised structure. Ideally, every article would have its own page which could be heavily tagged up with metadata such as themes, importance, length, quality, author and such. Is this the goal of the wiki?

Outreach and communications

Another major project is to fully index the Diaspora across Twitter, blogs, Tumblr, Reddit, Facebook, etc., and improve the flow of information between the relevant sub-communities.

You'll probably want to join one of the chat platforms if you're interested in getting involved. Hell, there are even a few memes and probably more to collect.

Rationalist research

I'll admit I'm ignorant of the goal of Arbital, but I do love me a wiki for research. Cross-referencing and citing ideas, merging, splitting, identifying and fully capturing truly interesting and useful ideas from fanciful and fleeting ones is how I've become an expert in a number of fields, just by being the first to assemble All The Things.

Certain ideas like the Paper clip maximizer have some popularity beyond just Less Wrong, but Murder Gandhi doesn't - yet. Polishing these ideas with existing and external references (and maybe blogging about them?) is a great way for the community discussion of yore to make its way into the publications of lazy journalists for dissemination. Hell, RationalWiki has been doing it for years now, they're not the only game in town.

 

If you have any ideas in these areas, or others just as technical, let me know either here, on the Less Wrong Slack group, or on my talk page, and maybe we can make Wikis Great Again? ;)

We are the Athenians, not the Spartans

13 wubbles 11 June 2017 05:53AM

The Peloponnesian War was a war between two empires: the sea-dwelling Athenians and the landlubber Spartans. Spartans were devoted to duty and country, living in barracks and drinking the black broth. From birth they trained to be the caste dictators of a slave-owning society, which would annually slay slaves to forestall a rebellion. The most famous Spartan is Leonidas, who died in a heroic last stand delaying the invasion of the Persians. To be a Spartan was to live a life devoted to toughness and duty.

Famous Athenians are Herodotus, inventor of history, Thucydides, Socrates, Plato, Hippocrates of the oath medical students still take, all the Greek playwrights, etc.  Attic Greek is the Greek we learn in our Classics courses. Athens was a city where the students of the entire known Greek world would come to learn from the masters, a maritime empire with hundreds of resident aliens, where slavery was comparable to that of the Romans. Luxury apartments, planned subdivisions, sexual hedonism, and free trade made up the life of the Athenian elite.

These two cities had deeply incompatible values. Spartans lived in fear that the Helots would rebel and kill them. Deeply suspicious of strangers, they imposed oligarchies upon the cities they conquered. They were described, by themselves and others, as cautious and slow to act. Athenians by contrast prized speed and risk in their enterprises. Foreigners could live freely in Athens and even established their own temples. The master and slave comedies of Athens inspired P. G. Wodehouse.

All intellectual communities are Athenian in outlook. We remember Sparta for its killing and Athens for its art. If we want the rationalist community to tackle the hard problems, if we want a world that is supportive of human values and beauty, if we yearn to end the plagues of humanity, our values should be Athenian: individualistic, open, trusting, enamoured of beauty. When we build social technology, it should not aim to cultivate values that stand against these.

High-trust, open societies are the societies where human lives are most improved. Beyond merely being refuges for the persecuted, they become havens for intellectual discussion and the improvement of human knowledge and practice. It is not a coincidence that one city produced Spinoza, Rubens, Rembrandt, van Dyck, Huygens, van Leeuwenhoek, and Grotius in a few short decades, while dominating the seas and being open to refugees.

Sadly we seem to have lost sight of this in the rationality community. Increasingly we are losing touch as a community with the outside intellectual world, without the impetus to study what has been done before and what the research lines are in statistics, ML, AI, epistemology, biology, etc. While we express that these things are important, the conversation doesn't seem to center around the actual content of these developments. In some cases (statistics) we're actively hostile to understanding some of the developments and limitations of our approach as a matter of tribal marker.

Some projects seem to me to be likely to worsen this, either because they express Spartan values or because they further physical isolation in ways that will act to create more small-group identification.

What can we do about this? Holiday modifications might help with reminding us of our values, but I don't know how we can change the community's outlook more directly. We should strive to stop merely acting on the meta-level and try to act on the object level more as a community. And lastly, we should notice that our values are real and not universal, and that they need defending.

A new, better way to read the Sequences

13 SaidAchmiz 04 June 2017 05:10AM

A new way to read the Sequences:

https://www.readthesequences.com

It's also more mobile-friendly than a PDF/mobi/epub.

(The content is from the book — Rationality: From AI to Zombies. Books I through IV are up already; Books V and VI aren't up yet, but soon will be.)

Edit: Book V is now up.

Edit 2: Book VI is now up.

Edit 3: A zipped archive of the site (for offline viewing) is now available for download.

Bad intent is a disposition, not a feeling

13 Benquo 01 May 2017 01:28AM

It’s common to think that someone else is arguing in bad faith. In a recent blog post, Nate Soares claims that this intuition is both wrong and harmful:

I believe that the ability to expect that conversation partners are well-intentioned by default is a public good. An extremely valuable public good. When criticism turns to attacking the intentions of others, I perceive that to be burning the commons. Communities often have to deal with actors that in fact have ill intentions, and in that case it's often worth the damage to prevent an even greater exploitation by malicious actors. But damage is damage in either case, and I suspect that young communities are prone to destroying this particular commons based on false premises.

To be clear, I am not claiming that well-intentioned actions tend to have good consequences. The road to hell is paved with good intentions. Whether or not someone's actions have good consequences is an entirely separate issue. I am only claiming that, in the particular case of small high-trust communities, I believe almost everyone is almost always attempting to do good by their own lights. I believe that propagating doubt about that fact is nearly always a bad idea.

It would be surprising, if bad intent were so rare in the relevant sense, that people would be so quick to jump to the conclusion that it is present. Why would that be adaptive?

What reason do we have to believe that we’re systematically overestimating this? If we’re systematically overestimating it, why should we believe that it’s adaptive to suppress this?

There are plenty of reasons why we might make systematic errors on things that are too infrequent or too inconsequential to yield a lot of relevant-feeling training data or matter much for reproductive fitness, but social intuitions are a central case of the sort of things I would expect humans to get right by default. I think the burden of evidence is on the side disagreeing with the intuitions behind this extremely common defensive response, to explain what bad actors are, why we are on such a hair-trigger against them, and why we should relax this.

Nate continues:

My models of human psychology allow for people to possess good intentions while executing adaptations that increase their status, influence, or popularity. My models also don’t deem people poor allies merely on account of their having instinctual motivations to achieve status, power, or prestige, any more than I deem people poor allies if they care about things like money, art, or good food. […]

One more clarification: some of my friends have insinuated (but not said outright as far as I know) that the execution of actions with bad consequences is just as bad as having ill intentions, and we should treat the two similarly. I think this is very wrong: eroding trust in the judgement or discernment of an individual is very different from eroding trust in whether or not they are pursuing the common good.

Nate's argument is almost entirely about mens rea - about subjective intent to make something bad happen. But mens rea is not really a thing. He contrasts this with actions that have bad consequences, which are common. But there’s something in the middle: following an incentive gradient that rewards distortions. For instance, if you rigorously A/B test your marketing until it generates the presentation that attracts the most customers, and don’t bother to inspect why they respond positively to the result, then you’re simply saying whatever words get you the most customers, regardless of whether they’re true. In such cases, whether or not you ever formed a conscious intent to mislead, your strategy is to tell whichever lie is most convenient; there was nothing in your optimization target that forced your words to be true ones, and most possible claims are false, so you ended up making false claims.
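As a toy sketch of the kind of incentive gradient I mean (everything here - the variants, the conversion rates, the numbers - is made up purely for illustration):

# Hypothetical sketch of the A/B-testing gradient described above: pick
# whichever marketing copy converts best. Note that nothing in the
# objective checks whether a claim is true.
import random

variants = {
    "A": {"claim": "Our product doubles your productivity", "true": False},
    "B": {"claim": "Our product may help you focus", "true": True},
}

def simulated_conversion_rate(variant, n_visitors=10_000):
    # Stand-in for a real experiment; here bolder claims happen to convert
    # better, purely to illustrate the gradient.
    base_rate = 0.05 if variant["true"] else 0.08
    conversions = sum(random.random() < base_rate for _ in range(n_visitors))
    return conversions / n_visitors

shipped = max(variants.values(), key=simulated_conversion_rate)
print("Shipped copy:", shipped["claim"])
# Truthfulness never appeared in the optimization target, so whether the
# shipped claim is true is left entirely to chance.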

More generally, if you try to control others' actions, and don't limit yourself to doing that by honestly informing them, then you'll end up with a strategy that distorts the truth, whether or not you meant to. The default hypothesis should be that any given constraint has not been applied to someone's behavior; to say that someone has the honest intent to inform is a positive claim about their intent. It's clear to me that we should expect this to sometimes be the case - sometimes people have a convergent incentive to inform one another, rather than a divergent incentive to grab control. But if you do not defend yourself and your community against divergent strategies unless there is unambiguous evidence, then you make yourself vulnerable to those strategies, and should expect to get more of them.

I’ve been criticizing EA organizations a lot for deceptive or otherwise distortionary practices (see here and here), and one response I often get is, in effect, “How can you say that? After all, I've personally assured you that my organization never had a secret meeting in which we overtly resolved to lie to people!”

Aside from the obvious problems with assuring someone that you're telling the truth, this is generally something of a non sequitur. Your public communication strategy can be publicly observed. If it tends to create distortions, then I can reasonably infer that you’re following some sort of incentive gradient that rewards some kinds of distortions. I don’t need to know about your subjective experiences to draw this conclusion. I don’t need to know your inner narrative. I can just look, as a member of the public, and report what I see.

Acting in bad faith doesn’t make you intrinsically a bad person, because there’s no such thing. And besides, it wouldn't be so common if it required an exceptionally bad character. But it has to be OK to point out when people are not just mistaken, but following patterns of behavior that are systematically distorting the discourse - and to point this out publicly so that we can learn to do better, together.

(Cross-posted at my personal blog.)

[EDITED 1 May 2017 - changed wording of title from "behavior" to "disposition"]

[Link] "On the Impossibility of Supersized Machines"

13 crmflynn 31 March 2017 11:32PM

SlateStarCodex Meetups Everywhere: Analysis

12 mingyuan 13 May 2017 12:29AM

The first round of SlateStarCodex meetups took place from April 4th through May 20th, 2017 in 65 cities, in 16 countries around the world. Of the 69 cities originally listed as having 10 or more people interested, 9 did not hold meetups, and 5 cities that were not on the original list did hold meetups.

We collected information from 43 of these events. Since we are missing data for 1/3 of the cities, there is probably some selection bias in the statistics; I would speculate that we are less likely to have data from less successful meetups.

Of the 43 cities, 25 have at least tentative plans for future meetups. Information about these events will be posted at the SSC Meetups GitHub.

 

Turnout

Attendance ranged from 3 to approximately 50 people, with a mean of 16.7. Turnout averaged about 50% of those who expressed interest on the survey (range: 12% to 100%), twice what Scott expected. This average does not appear to have been skewed by high turnout at a few events – mean: 48%, median: 45%, mode: 53%.

On average, gender ratio seemed to be roughly representative of SSC readership overall, ranging from 78% to 100% male (for the 5 meetups that provided gender data). The majority of attendees were approximately 20-35 years old, consistent with the survey mean age of 30.6.

 

Existing vs new meetups

Approximately one fifth of the SSC meetups were hosted by existing rationality or LessWrong groups. Some of these got up to 20 new attendees from the SSC announcement, while others saw no new faces at all. The two established meetups that included data about follow-up meetings reported that retention rates for new members were very low, at best 17% for the next meeting.

Here, it seems important to make a distinction between the needs of SSC meetups specifically and rationality meetups more generally. On the 2017 survey, 50% of readers explicitly did not identify with LW and 54% explicitly did not identify with EA. In addition, one organizer expressed the concern that, “Going forward, I think there is a concern of “rationalists” with a shared background outnumbering the non-lesswrong group, and dominating the SSC conversation, making new SSC fans less likely to engage.”

This raises the question of whether SSC groups should try to exist separately from local EA/LW/rationalist/skeptic groups – this is of particular concern in locations where the community is small and it’s difficult for any of these groups to function on their own due to low membership.

Along the same lines, one organizer wondered how often it made sense to hold events, since “If meetups happen very frequently, they will be attended mostly by hardcore fans (and a certain type of person), while if they are scheduled less frequently, they are likely to be attended by a larger, more diverse group. My fear is the hardcore fans who go bi-weekly will build a shared community that is less welcoming/appealing to outsiders/less involved people, and these people will be less willing to get involved going forward.”

Suggestions on how to address these concerns are welcome.

 

Advice for initial meetings

Bring name tags, and collect everyone’s email addresses. It’s best to do this on a computer or tablet, since some people have illegible handwriting, and you don’t want their orthographic deficiencies to mean you lose contact with them forever.

Don’t try to impose too much structure on the initial meeting, since people will mostly just want to get to know each other and talk about shared interests. If possible, it’s also good to not have a hard time limit - meetups in this round lasted between 1.5 and 6 hours, and you don’t want to have to make people leave before they’re ready. However, both structure and time limits are things you will most likely want if you have regularly recurring meetups.

 

Content

Most meetups consisted of unstructured discussion in smallish groups (~7 people). At least one organizer had people pair up and ask each other scripted questions, while another used lightning talks as an ice-breaker. Other activities included origami, Rationality Cardinality, and playing with magnadoodles and diffraction glasses, but mostly people just wanted to talk.

Topics, predictably, mostly centered around shared interests, and included: SSC and other rationalist blogs, rationalist fiction, the rationality community, AI, existential risk, politics and meta-politics, book recommendations, and programming (according to the survey, 30% of readers are programmers), as well as normal small talk and getting-to-know-each-other topics.

Common ice-breakers included first SSC post read, how people found SSC, favorite SSC post, and SSC vs LessWrong (aka, is Eliezer or Scott the rightful caliph).

Though a few meetups had a little difficulty getting conversation started and relied on ice-breakers and other predetermined topics, no organizer reported prolonged awkwardness; people had a lot to talk about and conversation flowed quite easily for the most part.

One area where several organizers encountered difficulties was large discrepancies in knowledge of rationalist-sphere topics among attendees, since some people had only recently discovered SSC or were even non-readers brought along by friends, while many others were long-time members of the community. Suggestions for quickly and painlessly bridging inferential gaps on central concepts in the community would be appreciated.

 

Locations 

Meetups occurred in diverse locations, including restaurants, cafés, pubs/bars, private residences, parks, and meeting rooms in coworking spaces or on university campuses.

Considerations for choosing a venue:

  • Capacity – Some meetups found that their original venues couldn’t accommodate the number of people who attended. This happened at a private residence and at a restaurant. Be flexible about moving locations if necessary.
  • Arrangement – For social meetups, you will probably want a more flexible format. For this purpose, it’s best to have the run of the space, which you have in private residences, parks, meeting rooms, and bars and restaurants if you reserve a whole room or floor.
  • Noise – Since the main activity is talking, this is an important consideration. An ideal venue is quiet enough that you can all hear each other, but (if public) not so quiet that you will be disrupting others with your conversation.
  • Visibility – If meeting in a public place, have a somewhat large sign that says ‘SSC’ on it, placed somewhere easily visible. If the location is large or hard to find, consider including your specific location (e.g. ‘we’re at the big table in the northwest corner’) or GPS coordinates in the meetup information.
  • Permission – Check with the manager first if you plan to hold a large meetup in a private building, such as a mall, market, or café. Also consider whether you’ll be disturbing other patrons.
  • Time restrictions – If you are reserving a space, or if you are meeting somewhere that has a closing time, be aware that people may want to continue their discussions for longer than the space is available. Have a contingency plan for this, a second location to move to in case you run overtime.
  • Availability of food – Some meetups lasted as long as six hours, so it’s good to either bring food, meet somewhere with easy access to food, or be prepared to go to a restaurant.
  • Privacy – People at some meetups were understandably hesitant to have controversial / culture war discussions in public. If you anticipate this being a problem, you should try to find a more private venue, or a more secluded area.

Conclusion

Overall most meetups went smoothly, and many had unexpectedly high turnout. Almost every single organizer, even for the tiny meetups, reported that attendees showed interest in future meetings, but few had concrete plans.

These events have been an important first step, but it remains to be seen whether they will lead to lasting local communities. The answer is largely up to you.

If you attended a meetup, seek out the people you had a good time talking to, and make sure you don’t lose contact with them. If you want there to be more events, just set a time and place and tell people. You can share details on local Facebook groups, Google groups, and email lists, and on LessWrong and the SSC meetups repository. If you feel nervous about organizing a meetup, don’t worry, there are plenty of resources just for that. And if you think you couldn’t possibly be an organizer because you’re somehow ‘not qualified’ or something, well, I once felt that way too. In Scott’s words, “it would be dumb if nobody got to go to meetups because everyone felt too awkward and low-status to volunteer.”

Finally, we’d like to thank Scott for making all of this possible. One of the most difficult things about organizing meetups is that it’s hard to know where to look for members, even if you know there must be dozens of interested people in your area. This was an invaluable opportunity to overcome that initial hurdle, and we hope that you all make the most of it.

 

Thanks to deluks917 for providing feedback on drafts of this report, and for having the idea to collect data in the first place :)

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

12 Stuart_Armstrong 11 May 2017 09:16AM

That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox

Anders Sandberg, Stuart Armstrong, Milan M. Cirkovic

If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

As far as I can tell, the paper's physics is correct (most of the energy comes not from burning stars but from the universe's mass).
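If I understand the argument, the headline 10^30 multiplier is essentially Landauer's bound at work: erasing one bit costs at least k_B·T·ln 2, so a fixed energy budget buys on the order of 1/T irreversible operations, and waiting for the universe to cool buys correspondingly more computation. A minimal sketch of that scaling - the far-future temperature below is a de Sitter-scale figure I'm assuming for illustration, not a number taken from the paper:

# Rough sketch of where a ~1e30 multiplier could come from, assuming
# Landauer-limited (irreversible) computation: erasing a bit costs
# k_B * T * ln(2), so bit-erasures per joule scale as 1/T.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T_NOW = 3.0          # rough current background temperature, K
T_FUTURE = 2.4e-30   # assumed far-future (de Sitter-scale) temperature, K

def bit_erasures_per_joule(temperature_k):
    return 1.0 / (K_B * temperature_k * math.log(2))

multiplier = bit_erasures_per_joule(T_FUTURE) / bit_erasures_per_joule(T_NOW)
print("Computation multiplier from waiting: %.1e" % multiplier)  # ~1e30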

However, the conclusions are likely wrong, because it's rational for "sleeping" civilizations to still want to round up stars that might be ejected from galaxies, collect cosmic dust, and so on.

The paper is still worth publishing, though, because there may be other, more plausible ideas in the vicinity of this one. And it describes how future civilizations may choose to use their energy.

Use and misuse of models: case study

12 Stuart_Armstrong 27 April 2017 02:36PM

Some time ago, I discovered a post comparing basic income and basic job ideas. This sought to analyse the costs of paying everyone a guaranteed income versus providing them with a basic job with that income. The author spelt out his assumptions and put together two models with a few components (including some whose values were drawn from various probability distributions). Then he ran a Monte Carlo simulation to get a distribution of costs for either policy.

Normally I should be very much in favour of this approach. It spells out the assumptions, it uses models, it decomposes the problem, it has stochastic uncertainty... Everything seems ideal. To top it off, the author concluded with a challenge aiming at improving reasoning around this subject:

How to Disagree: Write Some Code

This is a common theme in my writing. If you are reading my blog you are likely to be a coder. So shut the fuck up and write some fucking code. (Of course, once the code is written, please post it in the comments or on github.)

I've laid out my reasoning in clear, straightforward, and executable form. Here it is again. My conclusions are simply the logical result of my assumptions plus basic math - if I'm wrong, either Python is computing the wrong answer, I got really unlucky in all 32,768 simulation runs, or one of my assumptions is wrong.

My assumption being wrong is the most likely possibility. Luckily, this is a problem that is solvable via code.

And yet... I found something very unsatisfying, and it took me some time to figure out why. It's not that these models are helpful, or that they're misleading - it's that they're both at once.

To explain, consider the result of the Monte Carlo simulations. Here are the outputs (I added the red lines; we'll get to them soon):

The author concluded from these outputs that a basic job was much more efficient - less costly - than a basic income (roughly 1 trillion cost versus 3.4 trillion US dollars). He changed a few assumptions to test whether the result held up:

For example, maybe I'm overestimating the work disincentive for Basic Income and grossly underestimating the administrative overhead of the Basic Job. Lets assume both of these are true. Then what?

The author then found similar results, with some slight shifting of the probability masses.

 

The problem: what really determined the result

So what's wrong with this approach? It turns out that most of the variables in the models have little explanatory power. For the top red line, I just multiplied the US population by the basic income. The curve is slightly above it, because it includes such things as administrative costs. The basic job situation was slightly more complicated, as it includes a disabled population that gets the basic income without working, and an estimate for the added value that the jobs would provide. So the bottom red line is (disabled population)x(basic income) + (unemployed population)x(basic income) - (unemployed population)x(median added value of jobs). The distribution is wider than for basic income, as the added value of the jobs is a stochastic variable.
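To make the red-line arithmetic concrete, here is a minimal sketch of those two back-of-envelope estimates. The population and dollar figures are placeholders I've picked for illustration, not the inputs from the original model; only the structure of the comparison matters:

# Back-of-envelope estimates corresponding to the two red lines.
# All figures are illustrative placeholders, not the original model's inputs.
ADULT_POPULATION = 230e6   # assumed adults covered by the policy
BASIC_INCOME = 15e3        # assumed annual payment per person, USD
DISABLED_POP = 30e6        # assumed recipients exempt from the work requirement
UNEMPLOYED_POP = 50e6      # assumed people put to work under a basic job
MEDIAN_JOB_VALUE = 5e3     # assumed annual value produced per basic job, USD

# Top red line: pay everyone.
basic_income_cost = ADULT_POPULATION * BASIC_INCOME

# Bottom red line: pay only non-workers, minus the value the jobs produce.
basic_job_cost = (DISABLED_POP * BASIC_INCOME
                  + UNEMPLOYED_POP * BASIC_INCOME
                  - UNEMPLOYED_POP * MEDIAN_JOB_VALUE)

print("Basic income: ~$%.1f trillion/year" % (basic_income_cost / 1e12))
print("Basic job:    ~$%.1f trillion/year" % (basic_job_cost / 1e12))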

But, anyway, the contributions of the other variables were very minor. So the reduced cost of basic jobs versus basic income is essentially a consequence of the trivial fact that it's more expensive to pay everyone an income than to pay only some people and then put them to work at something of non-zero value.

 

Trees and forests

So were the complicated extra variables and Monte Carlo runs for nothing? Not completely - they showed that the extra variables were indeed of little importance, and unlikely to change the results much. But nevertheless, the whole approach has one big, glaring flaw: it does not account for the extra value for individuals of having a basic income versus a basic job.

And the challenge - "write some fucking code" - obscures this. The forest of extra variables and the thousands of runs hides the fact that there is a fundamental assumption missing. And pointing this out is enough to change the result, without even needing to write code. Note this doesn't mean the result is wrong: some might even argue that people are better off with a job than with the income (builds pride in one's work, etc...). But that needs to be addressed.

So Chris Stucchio's careful work does show one result - most reasonable assumptions do not change the fact that basic income is more expensive than basic job. And to disagree with that, you do indeed need to write some fucking code. But the stronger result - that basic job is better than basic income - is not established by this post. A model can be well designed, thorough, filled with good uncertainties, and still miss the mark. You don't always have to enter into the weeds of the model's assumptions in order to criticise it.

[Link] Putanumonit: What statistical power means, and why I'm terrified about psychology

11 Jacobian 21 June 2017 06:29PM

Bi-Weekly Rational Feed

11 deluks917 10 June 2017 09:56PM

===Highly Recommended Articles:

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

How Givewell Uses Cost Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally: unless one charity is measured as 2-3x as cost-effective as another, GiveWell is unsure whether a real difference exists. Cost-effectiveness is nevertheless the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.

===Scott:

SSC Journal Club: AI Timelines by Scott Alexander - A new paper surveying what AI experts think about AI progress. Contradictory results about when AI will surpass humans at all tasks. Opinions on AI risk: experts are taking the arguments seriously.

Terrorism and Involuntary Commitment by Scott Alexander (Scratchpad) - The leader of the terrorist attack in London was in a documentary about jihadists living in Britain. “Being the sort of person who seems likely to commit a crime isn’t illegal.” Involuntary commitment.

Is Pharma Research Worse Than Chance by Scott Alexander - The most promising drugs of the 21st century are MDMA and ketamine (third is psilocybin). These drugs were all found by the drug community. Maybe pharma should look for compounds with large effect sizes instead of searching for drugs with no side-effects.

Open Thread 77- Opium Thread by Scott Alexander - Bi-weekly open thread. Includes some comments of the week and an update on translating "Bringing Up Genius".

Third and Fourth Thoughts on Dragon Army by SlateStarScratchpad. - Scott goes from Anti-Anti-Dragon-Army to Anti-Dragon-Army. He then gets an email from Duncan and updates in favor of the position that Duncan thought things out well.

Hungarian Education III: Mastering The Core Teachings Of The Budapestians by Scott Alexander - László Polgár wanted to prove he could intentionally raise chess geniuses. He raised the #1, #2, and #6 female chess players in the world.

Four Nobel Truths by Scott Alexander - Four Graphs describing facts about Israeli/Askenazi Nobel Prizes.

===Rationalist:

The Precept Of Niceness by H i v e w i r e d - Prisoner's Dilemmas. Even against a truly alien opponent you should still cooperate as long as possible in the iterated prisoner's dilemma, even with fixed round lengths: play tit-for-tat. Niceness is the best strategy.

Epistemology Vs Critical Thinking by Onemorenickname (lesswrong) - Epistemies work. General approaches don't work. Scientific approaches work. Epistemic effort vs Epistemic status. Criticisms of lesswrong Bayesianism.

Tasting Godhood by Agenty Duck - Poetic and personal. Wine tasting. Empathizing with other people. Seeing others as whole people. How to dream about other people. Sci-fi futures. Tasting godhood is the same as tasting other people. Looking for your own godhood.

Bayes: A Kinda Sorta Masterpost by Nostalgebraist - A long and very well thought-out criticism of Bayesianism. Explanation of Bayesian methodology. Comparison with classical statistics. Arguments for Bayes. The problem of ignored hypotheses with known relations. The problem of new ideas. Where do priors come from? Regularization and insights from machine learning.

Dichotomies by mindlevelup - 6 short essays about dichotomies and what's useful about noticing them. Fast vs Slow thinking. Focused vs Diffuse Mode. Clean vs Dirty Thinking. Inside vs Outside View. Object vs Meta level. Generative vs Iterative Mode. Some conclusions about the method.

How Men And Women Perceive Relationships Differently by AellaGirl - Survey Results about Relationship quality over time. Lots of graphs and a link to the raw data. "In summary, time is not kind. Relationships show an almost universal decrease in everything good the longer they go on. Poly is hard, and you have to go all the way to make it work – especially for men. Religion is also great, if you’re a man. Women get more excited and insecure, men feel undesirable."

Summer Programming by Jacob Falkovich (Put A Number On It!) - Jacob's Summer writing plan. Re-writing part of the lesswrong sequences. Ribbonfarm's longform blogging course on refactored perception.

Bet Or Update Fixing The Will to Wager Assumption by cousin_it (lesswrong) - Betting with better informed agents is irrational. Bayesian agents should however update their prior or agree to bet. Good discussion in comments.

Kindness Against The Grain by Sarah Constantin (Otium) - Sympathy and forgiveness evolved to follow local incentive gradients. Some details on who we sympathize with and who we don't. The difference between a good deal and a sympathetic deal. Smooth emotional gradients and understanding what other people want. Forgiveness as not following the local gradient and why this can be useful.

Bring Up Genius by Viliam (lesswrong) - An "80/20" translation. Positive motivation. Extreme resistance from the Hungarian government and press. Polgar's five principles. Biting criticism of the school system. Learning in early childhood. Is Genius a gift or curse? Celebrity. Detailed plan for daily instruction. Importance of diversity. Why chess? Teach the chess with love, playfully. Emancipation of women. Polgar's happy family.

Deorbiting A Metaphor by H i v e w i r e d - Another post in the origin sequence. Rationalist Myth-making. (note: I am unlikely to keep linking all of these. Follow hivewired’s blog)

Conformity Excuses by Robin Hanson - Human behavior is often explained by pressure to conform. However we consciously experience much less pressure. Robin discusses a list of ways to rationalize conforming.

Becoming A Better Community by Sable (lesswrong) - Lesswrong holds its members to a high standard. Intimacy requires unguarded spontaneous interactions. Concrete ideas to add more fun and friendship to lesswrong.

Optimizing For Meta Optimization by H i v e w i r e d - A very long list of human cultural universals and comments on which ones to encourage/discourage: Myths, Language, Cognition, Society. Afterwards some detailed bullet points about an optimal dath ilanian culture.

On Resignation by Small Truths - Artificial intelligence. "It’s an embarrassing lapse, but I did not think much about how the very people who already know all the stuff I’m learning would behave. I wasn’t thinking enough steps ahead. Seen in this context, Neuralink isn’t an exciting new tech venture so much as a desperate hope to mitigate an unavoidable disaster."

Cognitive Science/Psychology As A Neglected Approach To AI Safety by Kaj Sotala (EA forum) - Ways psychology could benefit AI safety: "The psychology of developing an AI safety culture, Developing better analyses of 'AI takeoff' scenarios, Defining just what it is that human values are, Better understanding multi-level world-models." Lots of interesting links.

Mode Collapse And The Norm One Principle by tristanm (lesswrong) - Generative Adversarial Networks. Applying the lessons of Machine Learning to discourse. How to make progress when the critical side of discourse is very powerful. "My claim is that any contribution to a discussion should satisfy the "Norm One Principle." In other words, it should have a well-defined direction, and the quantity of change should be feasible to implement."

Finite And Infinite by Sarah Constantin (Otium) - "James Carse, in Finite and Infinite Games, sets up a completely different polarity, between infinite game-playing (which is open-ended, playful, and non-competitive) vs. finite game-playing (which is definite, serious, and competitive)." Playfulness, property, and cooperating with people who seriously weird you out.

Script for the rationalist seder is linked by Raemon (lesswrong) - An explanation of Rationalist Seder, a remix of the Passover Seder refocused on liberation in general. A story of two tribes and the power of stories. The full Haggadah/script for the rationalist Seder is linked.

The Personal Growth Cycle by G Gordon Worley (Map and Territory) - Stages of Development. "Development starts from a place of integration, followed by disintegration into confusion, which through active efforts at reintegration in a safe space results in development. If a safe space for reintegration is not available, development may not proceed."

Until We Build Dath Ilan by H i v e w i r e d - Eliezer's Sci-fi utopia Dath Ilan. The nature of the rationalist community. A purpose for the rationality community. Lots of imagery and allusions. A singer is someone who tries to do good.

Do AI Experts Exist by Bayesian Investor - Some of the numbers from "When Will AI Exceed Human Performance? Evidence from AI Experts" don't make sense.

Relinquishment Cultivation by Agenty Duck - Agenty Duck designs meditation to cultivate the attitude of "If X is true I wish to believe X, if X is not true I wish to believe not X". The technique is inspired by 'loving-kindness' meditation.

10 Incredible Weaknesses Of The Mental Health Workforce by arunbharatula (lesswrong) - Ten arguments that undermine the credibility of the mental health workforce. Some of the arguments are sourced and argued significantly more thoroughly than others.

Philosophical Parenthood by SquirrelInHell - Updateless Decision theory. Ashkenazi intelligence. "In this post, I will lay out a strong philosophical argument for rational and intelligent people to have children. It's important and not obvious, so listen well."

On Connections Between Brains And Computers by Small Truths - A condensation of Tim Urban's 36K-word article about Neuralink. The astounding benefits of having even a Siri-level AI responding directly to your thoughts. The existential threat of AI means that mind-computer links are worth the risks.

Thoughts Concerning Homeschooling by Ozy (Thing of Things) - Evidence that many public school practices are counter-productive. Stats on the academic performance of home-schoolers. Educating 'weird awkward nerds'.

The Face Of The Ice by Sarah Constantin (Otium) - Mountaineering. Survival Mindset vs Sexual-Selection Mindset. War and the Wilderness. Technical Skill.

===EA:

Review Of Ea New Zealands Doing Good Better Book by cafelow (EA forum) - New Zealand EAs gave out 250 copies of "Doing Good Better". 80 of the recipients responded to a follow up survey. The results were extremely encouraging. Survey details and discussion. Possible flaws with the giveaway and survey.

Announcing Effective Altruism Grants by Maxdalton (EA forum) - CEA is giving out £100,000 grants for personal projects. "We believe that providing those people with the resources that they need to realize their potential could be a highly effective use of resources." A list of what projects could get funded, the list is very broad. Evaluation criteria.

A Powerful Weapon in the Arsenal (Links Post) by GiveDirectly - 8 Links on Basic Income, Effective Altruism, Cash Transfers and Donor Advised Funds

A Paradox In The Measurement Of The Value Of Life by klloyd (EA forum) - Eight Thousand words on: “A Health Economics Puzzle: Why are there apparent inconsistencies in the monetary valuation of a statistical life (VSL) and a quality-adjusted life year (QALY$)?”

New Report Consciousness And Moral Patienthood by Open Philosophy - “In short, my tentative conclusions are that I think mammals, birds, and fishes are more likely than not to be conscious, while (e.g.) insects are unlikely to be conscious. However, my probabilities are very “made-up” and difficult to justify, and it’s not clear to us what actions should be taken on the basis of such made-up probabilities.”

Adding New Funds To Ea Funds by the Center for Effective Altruism (EA forum) - The Center for Effective Altruism wants feedback on whether it should add more EA funds. Each question is followed by a detailed list of critical considerations.

How Givewell Uses Cost Effectiveness Analyses by The GiveWell Blog - GiveWell doesn't take its estimates literally: unless one charity is measured as 2-3x as cost-effective as another, GiveWell is unsure whether a real difference exists. Cost-effectiveness is nevertheless the most important factor in GiveWell's recommendations. GiveWell goes into detail about how it deals with great uncertainty and suboptimal data.

The Time Has come to Find Out [Links] by GiveDirectly - 8 media links related to Cash Transfers, Give Directly and Effective Altruism.

Considering Considerateness: Why Communities Of Do-Gooders Should Be Exceptionally Considerate by The Center for Effective Altruism - Consequentialist reasons to be considerate and trustworthy. Detailed and contains several graphs. Includes practical discussions of when not to be considerate and how to handle unreasonable preferences. The conclusion discusses how considerate EAs should be. The bibliography contains many very high quality articles written by the community.

===Politics and Economics:

Summing Up My Thoughts On Macroeconomics by Noah Smith - Slides from Noah's talk at the Norwegian Finance Ministry. Comparison of Industry, Central Bank and Academic Macroeconomics. Overview of important critiques of academic macro. The standard DSGE model and ways to improve it. What makes a good Macro theory. Go back to the microfoundations.

Why Universities Cant Be The Primary Site Of Political Organizing by Freddie deBoer - Few people on campus. Campus activism is seasonal. Students are an itinerant population. Town and gown conflicts. Students are too busy. First priority is employment. Is activism a place for student growth?. Labor principles.

Some Observations On Cis By Default Identification by Ozy (Thing of Things) - Many 'cis-by-default' people are repressing or not noticing their gender feelings. This effect strongly depends on a person's community.

One Day We Will Make Offensive Jokes by AellaGirl - "This is why I feel suspicious of some groups that strongly oppose offensive jokes – they have the suspicion that every person is like my parents – that every human “actually wants” all the terrible things to happen."

Book Review Weapons Of Math Destruction by Zvi Moshowitz - Extremely long. "What the book is actually mostly about on its surface, alas, is how bad and unfair it is to be a Bayesian. There are two reasons, in her mind, why using algorithms to be a Bayesian is just awful."

A Brief Argument With Apparently Informed Global Warming Denialists by Artir (Nintil) - Details of the back-and-forth argument. Some commentary on practical rationality and speculation about how the skeptic might have felt.

The Shouting Class by Noah Smith - The majority of comments come from a tiny minority of commentators. Social media is giving a bullhorn to the people who constantly complain. Negativity is contagious. The level of discord in society is getting genuinely dangerous. The French Revolution. The author criticizes shouters on the Left and Right.

Population By Country And Region 10K BCE to 2016 CE by Luke Muehlhauser - 204 countries, 27 regions. Links to the database used and a forthcoming explanatory paper. From 10K BCE to 0 CE gaps are 1000 years. From 0 CE to 1700 CE gaps are 100 years. After that they are 10 years long.

Regulatory Lags For New Technology 2013 Notes by gwern (lesswrong) - Gwern looks at the history of regulation for high frequency trading, self driving cars and hacking. The post is mostly comprised of long quotes from articles linked by gwern.

Two Economists Ask Teachers To Behave As Irrational Actors by Freddie deBoer - A response to Cowen's interview of Raj Chetty. Standard Education reform rhetoric implies that hundreds of thousands of teachers need to be fired. However teachers don't control most of the important inputs to student performance. You won't get more talented teachers unless you increase compensation.

Company Revenue Per Employee by Tyler Cowen - The energy sector has high revenue per employee. The highest score was attained by a pharmaceutical distributor. Hotels, restaurants and consumer discretionaries do the worst on this metric. Tech has a middling performance.

===Misc:

A Remark On Usury by Entirely Useless - "To take usury for money lent is unjust in itself, because this is to sell what does not exist, and this evidently leads to inequality which is contrary to justice." Thomas Aquinas is quoted at length explaining the preceding statement. EntirelyUseless argues that Aquinas mixes up the buyer and the seller.

Bike To Work Houston by Mr. Money Mustache - How a lawyer bikes to work in Houston. Bikes are surprisingly fast relative to cars in cities. Houston is massive.

Fuckers Vs Raisers by AellaGirl - Evolutionary psychology. The qualities that are attractive in a guy who sleeps around are also attractive in a guy who wants to settle down.

Reducers Transducers And Coreasync In Clojure by Eli Bendersky - "I find it fascinating how one good idea (reducers) morphed into another (transducers), and ended up mating with yet another, apparently unrelated concept (concurrent pipelines) to produce some really powerful coding abstractions."

Thingness And Thereness by Venkatesh Rao (ribbonfarm) - The relation between politics, home and frontier. Big Data, deep learning and the blockchain. Liminal spaces and conditions.

Create 2314 by protokol2020 - Find the shortest algorithm to create the number 2314 using a prescribed set of operations.

Text To Speech Speed by Jeff Kaufman - Text to speech has become a very efficient way to interact with computers. Questions about settings. Very short.

Hello World! Stan, Pymc3 and Edward by Bob Carpenter (Gelman's Blog) - Comparison of the three frameworks. Test case of Bayesian linear regression. Extendability and efficiency of the frameworks is discussed.

Computer Science Majors by Tyler Cowen - Tyler links to an article by Dan Wang. The author gives 11 reasons why CS majors are rare, none of which he finds convincing. Eventually the author seems to conclude that the 2001 bubble, the changing nature of the CS field, the power-law distribution of developer productivity, and lack of job security are important causes.

Beespotting On I-5 by Eukaryote - Drive from San Fran to Seattle. The vast agricultural importance of Bees. Improving Bee quality of life.

===Podcast:

81 Leaving Islam by Waking Up with Sam Harris - "Sarah Haider. Her organization Ex-Muslims of North America, how the political Left is confused about Islam, "rape culture" under Islam, honesty without bigotry, stealth theocracy, immigration, the prospects of reforming Islam"

Newcomers by Venam - A transcript of a discussion about advice for new Unix users. Purpose. Communities. Learning by Yourself. Technical Tips. Venam linked tons of podcast transcripts today. Check them out.

Masha Gessen, Russian-American Journalist by The Ezra Klein Show - Trump and Russia, plausible and sinister explanation. Ways Trump is and isn't like Putin, studying autocracies, the psychology of Jared Kushner

Christy Ford by EconTalk - "A history of how America's health care system came to be dominated by insurance companies or government agencies paying doctors per procedure."

Nick Szabo by Tim Ferriss - "Computer scientist, legal scholar, and cryptographer best known for his pioneering research in digital contracts and cryptocurrency."

The Road To Tyranny by Waking Up with Sam Harris - Timothy Snyder. His book On Tyranny: Twenty Lessons from the Twentieth Century.

Hans Noel On The Role Of Ideology In Politics by Rational Speaking - "Why the Democrats became the party of liberalism and the Republicans the party of conservatism, whether voters are hypocrites in the way they apply their ostensible ideology, and whether politicians are motivated by ideals or just self-interest."

Existential risk from AI without an intelligence explosion

11 AlexMennen 25 May 2017 04:44PM

[xpost from my blog]

In discussions of existential risk from AI, it is often assumed that the existential catastrophe would follow an intelligence explosion, in which an AI creates a more capable AI, which in turn creates a yet more capable AI, and so on, a feedback loop that eventually produces an AI whose cognitive power vastly surpasses that of humans, which would be able to obtain a decisive strategic advantage over humanity, allowing it to pursue its own goals without effective human interference. Victoria Krakovna points out that many arguments that AI could present an existential risk do not rely on an intelligence explosion. I want to look in slightly more detail at how that could happen. Kaj Sotala also discusses this.

An AI starts an intelligence explosion when its ability to create better AIs surpasses that of human AI researchers by a sufficient margin (provided the AI is motivated to do so). An AI attains a decisive strategic advantage when its ability to optimize the universe surpasses that of humanity by a sufficient margin. Which of these happens first depends on what skills AIs have the advantage at relative to humans. If AIs are better at programming AIs than they are at taking over the world, then an intelligence explosion will happen first, and it will then be able to get a decisive strategic advantage soon after. But if AIs are better at taking over the world than they are at programming AIs, then an AI would get a decisive strategic advantage without an intelligence explosion occurring first.

Since an intelligence explosion happening first is usually considered the default assumption, I'll just sketch a plausibility argument for the reverse. There's a lot of variation in how easy cognitive tasks are for AIs compared to humans. Since programming AIs is not yet a task that AIs can do well, it doesn't seem like it should be a priori surprising if programming AIs turned out to be an extremely difficult task for AIs to accomplish, relative to humans. Taking over the world is also plausibly especially difficult for AIs, but I don't see strong reasons for confidence that it would be harder for AIs than starting an intelligence explosion would be. It's possible that an AI with significantly but not vastly superhuman abilities in some domains could identify some vulnerability that it could exploit to gain power, which humans would never think of. Or an AI could be enough better than humans at forms of engineering other than AI programming (perhaps molecular manufacturing) that it could build physical machines that could out-compete humans, though this would require it to obtain the resources necessary to produce them.

Furthermore, an AI that is capable of producing a more capable AI may refrain from doing so if it is unable to solve the AI alignment problem for itself; that is, if it can create a more intelligent AI, but not one that shares its preferences. This seems unlikely if the AI has an explicit description of its preferences. But if the AI, like humans and most contemporary AI, lacks an explicit description of its preferences, then the difficulty of the AI alignment problem could be an obstacle to an intelligence explosion occurring.

It also seems worth thinking about the policy implications of the differences between existential catastrophes from AI that follow an intelligence explosion versus those that don't. For instance, AIs that attempt to attain a decisive strategic advantage without undergoing an intelligence explosion will exceed human cognitive capabilities by a smaller margin, and thus would likely attain strategic advantages that are less decisive, and would be more likely to fail. Thus containment strategies are probably more useful for addressing risks that don't involve an intelligence explosion, while attempts to contain a post-intelligence explosion AI are probably pretty much hopeless (although it may be worthwhile to find ways to interrupt an intelligence explosion while it is beginning). Risks not involving an intelligence explosion may be more predictable in advance, since they don't involve a rapid increase in the AI's abilities, and would thus be easier to deal with at the last minute, so it might make sense far in advance to focus disproportionately on risks that do involve an intelligence explosion.

It seems likely that AI alignment would be easier for AIs that do not undergo an intelligence explosion, since it is more likely to be possible to monitor and do something about it if it goes wrong, and lower optimization power means lower ability to exploit the difference between the goals the AI was given and the goals that were intended, if we are only able to specify our goals approximately. The first of those reasons applies to any AI that attempts to attain a decisive strategic advantage without first undergoing an intelligence explosion, whereas the second only applies to AIs that do not undergo an intelligence explosion ever. Because of these, it might make sense to attempt to decrease the chance that the first AI to attain a decisive strategic advantage undergoes an intelligence explosion beforehand, as well as the chance that it undergoes an intelligence explosion ever, though preventing the latter may be much more difficult. However, some strategies to achieve this may have undesirable side-effects; for instance, as mentioned earlier, AIs whose preferences are not explicitly described seem more likely to attain a decisive strategic advantage without first undergoing an intelligence explosion, but such AIs are probably more difficult to align with human values.

If AIs get a decisive strategic advantage over humans without an intelligence explosion, then since this would likely involve the decisive strategic advantage being obtained much more slowly, it would be much more likely for multiple, and possibly many, AIs to gain decisive strategic advantages over humans, though not necessarily over each other, resulting in a multipolar outcome. Thus considerations about multipolar versus singleton scenarios also apply to decisive strategic advantage-first versus intelligence explosion-first scenarios.
