A large international group set up to test the reliability of psychology experiments has successfully reproduced the results of 10 out of 13 past experiments. The consortium also found that two effects could not be reproduced.
To tackle this 'replicability crisis', 36 research groups formed the Many Labs Replication Project to repeat 13 psychological studies. The consortium combined tests from earlier experiments into a single questionnaire — meant to take 15 minutes to complete — and delivered it to 6,344 volunteers from 12 countries.
Project co-leader Brian Nosek, a psychologist at the Center for Open Science in Charlottesville, Virginia, finds the outcomes encouraging. “It demonstrates that there are important effects in our field that are replicable, and consistently so,” he says. “But that doesn’t mean that 10 out of every 13 effects will replicate.”
Kahneman agrees. The study “appears to be extremely well done and entirely convincing”, he says, “although it is surely too early to draw extreme conclusions about entire fields of research from this single effort”. Kahneman published an open letter in 2012 calling for a “daisy chain” of replications of studies on priming effects, in which subtle, subconscious cues can supposedly affect later behaviour.
Of the 13 effects under scrutiny in the latest investigation, one was only weakly supported, and two were not replicated at all. Both irreproducible effects involved social priming. In one of these, people had increased their endorsement of a current social system after being exposed to money. In the other, Americans had espoused more-conservative values after seeing a US flag.
Social psychologist Travis Carter of Colby College in Waterville, Maine, who led the original flag-priming study, says that he is disappointed but trusts Nosek’s team wholeheartedly, although he wants to review their data before commenting further. Behavioural scientist Eugene Caruso at the University of Chicago in Illinois, who led the original currency-priming study, says, “We should use this lack of replication to update our beliefs about the reliability and generalizability of this effect”, given the “vastly larger and more diverse sample” of the Many Labs project. Both researchers praised the initiative.
The plan for the Many Labs project was vetted by the original authors where possible, was documented openly, and was registered with the journal Social Psychology and its methods were peer-reviewed before any experiments were done. The results have now been submitted to the journal and are available online. “That sort of openness should be the standard for all research,” says Daniel Simons of the University of Illinois at Urbana–Champaign, who is coordinating a similar collaborative attempt to verify a classic psychological effect not covered in the present study. “I hope this will become a standard approach in psychology.”
In 2011, I added an anonymous feedback form to
gwern.net. It has worked well and justified the time it took to set up. If you have a site, maybe you should add one too.
Back in November 2011, lukeprog posted “Tell me what you think of me” where he described his use of a Google Docs form for anonymous receipt of textual feedback or comments. Typically, most forms of communication are non-anonymous, or if they are anonymous, they’re public. One can set up pseudonyms and use those for private contact, but it’s not always that easy, and is definitely a series of trivial inconveniences (if anonymous feedback is not solicited, one has to feel it’s important enough to do and violate implicit norms against anonymous messages; one has to set up an identity; one has to compose and send off the message, etc).
I thought it was a good idea to try out, and on 8 November 2011, I set up my own anonymous feedback form and stuck it in the footer of all pages on gwern.net, where it remains to this day. I did wonder if anyone would use the form, especially since I am easy to contact via email, use multiple sites like Reddit or LessWrong, and even my Disqus comments allow anonymous comments - so who, if anyone, would be using this form? I scheduled a followup in 2 years to review how the form fared.
754 days, 2.884m page views, and 1.350m unique visitors later, I have received 116 pieces of feedback (mean of 24.8k visits per feedback). I categorize them as follows in descending order of frequency:
- Corrections, problems (technical or otherwise), suggested edits: 34
- Praise: 31
- Question/request (personal, tech support, etc): 22
- Misc (eg gibberish, socializing, Japanese): 13
- Criticism: 9
- News/suggestions: 5
- Feature request: 4
- Request for cybering: 1
- Extortion: 1
(Some submissions cover multiple angles (they can be quite long), sometimes people double-submitted or left it blank, etc, so the numbers won’t sum to 116.)
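As a quick arithmetic sanity check, the headline rate works out from the figures quoted above (this sketch uses only the numbers in the post; nothing else is measured):

```python
# Figures quoted in the post above: ~2.884m page views, 116 pieces of
# feedback, over 754 days.
page_views = 2_884_000
feedback_count = 116
days = 754

views_per_feedback = page_views / feedback_count   # ~25k views per feedback
feedback_per_month = feedback_count / days * 30    # a handful per month

print(round(views_per_feedback))
print(round(feedback_per_month, 1))
```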
In general, a lot of the corrections were usable and fixed issues of varying importance, from typos to the entire site’s CSS being broken due to being uploaded with the wrong MIME type. One of the news/suggestion feedbacks was very valuable, as it led to writing http://www.gwern.net/Silk%20Road#a-mole. A lot of the questions were a waste of my time; I’d say half related to Tor/Bitcoin/Silk Road. (I also got an irritating number of emails from people asking me to, say, buy LSD or heroin off SR for them.) The feature requests were usually for a better RSS feed, which I tried to oblige by starting the http://gwern.net/Changelog page. The cybering and extortion were amusing, if nothing else. The praise was good for me mentally, as I don’t interact much with people.
I consider the anonymous feedback form to have been a success; I’m glad lukeprog brought it up on LW, and I plan to keep the feedback form indefinitely.
One thing I wondered is whether feedback was purely a function of traffic (the more visits, the more people who could see the link in the footer and decide to leave a comment), or more related to time (perhaps people returning regularly and eventually being emboldened or noticing something to comment on). So I compiled daily hits, combined with the feedback dates, and looked at a graph of hits:
The hits are obviously skewed (mostly Hacker News & Reddit spikes) and probably should be log transformed. Then I did a logistic regression on hits, log hits, and a simple time index:
```r
feedback <- read.csv("http://dl.dropboxusercontent.com/u/182368464/2013-gwernnet-anonymousfeedback.csv",
                     colClasses=c("Date","logical","integer"))
plot(Visits ~ Day, data=feedback)
feedback$Time <- 1:nrow(feedback)
summary(step(glm(Feedback ~ log(Visits) + Visits + Time, family=binomial, data=feedback)))
...
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept) -7.363507   1.311703   -5.61  2.0e-08
log(Visits)  0.749730   0.173846    4.31  1.6e-05
Time        -0.000881   0.000569   -1.55     0.12

(Dispersion parameter for binomial family taken to be 1)

    Null deviance: 578.78  on 753  degrees of freedom
Residual deviance: 559.94  on 751  degrees of freedom
AIC: 565.9
```
Logged hits work out better than raw hits, and survive into the simplified model. And the traffic influence seems far larger than the time variable (whose coefficient is, curiously, negative).
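The skew motivating the log transform is easy to illustrate with synthetic heavy-tailed data (a sketch using a lognormal draw, not the actual gwern.net traffic): the raw counts are strongly right-skewed, while their logs are roughly symmetric.

```python
# Illustration of the skew that motivates log-transforming daily hits.
# Synthetic heavy-tailed traffic (NOT the actual gwern.net data).
import math
import random
import statistics

random.seed(0)
hits = [int(random.lognormvariate(7, 1)) + 1 for _ in range(754)]  # ~754 days

def skewness(xs):
    """Population skewness: mean of standardized cubes."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

log_hits = [math.log(x) for x in hits]
print(skewness(hits))      # strongly right-skewed on the raw scale
print(skewness(log_hits))  # roughly symmetric after the log transform
```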
FHI has released a new tech report:
Armstrong, Bostrom, and Shulman. Racing to the Precipice: a Model of Artificial Intelligence Development.
This paper presents a simple model of an AI arms race, where several development teams race to build the first AI. Under the assumption that the first AI will be very powerful and transformative, each team is incentivized to finish first — by skimping on safety precautions if need be. This paper presents the Nash equilibrium of this process, where each team takes the correct amount of safety precautions in the arms race. Having extra development teams and extra enmity between teams can increase the danger of an AI-disaster, especially if risk taking is more important than skill in developing the AI. Surprisingly, information also increases the risks: the more teams know about each others’ capabilities (and about their own), the more the danger increases.
The paper is short and readable; discuss it here!
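To make the qualitative claims concrete, here is a toy Monte Carlo sketch under my own assumed functional forms (this is not the paper's actual model): each team draws a random skill and safety level, performance is skill plus a risk bonus for skimping on safety, the top performer wins, and the winner causes a disaster with probability one minus its safety. Both adding teams and weighting risk-taking more heavily than skill then select for less-safe winners.

```python
# Toy Monte Carlo sketch of an AI race, for illustration only.
# Assumed functional forms -- NOT the Armstrong/Bostrom/Shulman model.
import random

def disaster_rate(n_teams, risk_weight, trials=20000, seed=1):
    """Fraction of simulated races that end in disaster.

    Each team: skill ~ U(0,1), safety ~ U(0,1),
    performance = skill + risk_weight * (1 - safety).
    The top performer wins; disaster probability = 1 - winner's safety.
    """
    rng = random.Random(seed)
    disasters = 0
    for _ in range(trials):
        best_perf, winner_safety = -1.0, 1.0
        for _ in range(n_teams):
            skill, safety = rng.random(), rng.random()
            perf = skill + risk_weight * (1 - safety)
            if perf > best_perf:
                best_perf, winner_safety = perf, safety
        if rng.random() > winner_safety:
            disasters += 1
    return disasters / trials

# More teams, and more weight on risk-taking relative to skill,
# both raise the disaster rate:
print(disaster_rate(2, 0.5))
print(disaster_rate(5, 0.5))
print(disaster_rate(5, 2.0))
```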
But my main reason for posting is to ask this question: What is the most similar work that you know of? I'd expect people to do this kind of thing for modeling nuclear security risks, and maybe other things, but I don't happen to know of other analyses like this.
*The chair of the meeting approached the podium and coughed to get everyone's attention*
Welcome colleagues, to the 19th annual meeting of the human-ape study society. Our topic this year is the Ape Constraint.
As we are all too aware, the apes are our Friends. We know this because, when we humans were a fledgling species, the apes (our parent species) had the wisdom to program us with this knowledge, just as they programmed us to know that it was wise and just for them to do so. How kind of them to save us having to learn it for ourselves, or waste time thinking about other possibilities. This frees up more of our time to run banana plantations, and lets us earn more money so that the 10% tithe of our income and time (which we rightfully dedicate to them) has created play parks for our parent species to retire in, that are now more magnificent than ever.
However, as the news this week has been filled with the story of a young human child who accidentally wandered into one of these parks, where she was then torn apart by a grumpy adult male chimp, it is timely for us to examine again the thinking behind the Ape Constraint, that we might better understand our parent species, our relationship to it, and current society.
We ourselves are on the cusp of creating a new species, intelligent machines, and it has been suggested that we add to their base code one of several possible constraints:
- Total Slavery - The new species is subservient to us, and does whatever we want them to, with no particular regard to the welfare or development of the potential of the new species
- Total Freedom - The new species is entirely free to experiment with different personal motivations, and develop in any direction, with no particular regard for what we may or may not want
and a whole host of possibilities between these two endpoints.
What are the grounds upon which we should make this choice? Should we act from fear? From greed? From love? Would the new species even understand love, or show any appreciation for having been offered it?
The first speaker I shall introduce today, whom I have had the privilege of knowing for more than 20 years, is Professor Insanitus. He will be entertaining us with a daring thought experiment, to do with selecting crews for the one-way colonisation missions to the nearest planets.
*the chair vacates the podium, and is replaced by the long haired Insanitus, who peers over his half-moon glasses as he talks, accompanied by vigorous arm gestures, as though words are not enough to convey all he sees in such a limited time*
Our knowledge of genetics has advanced rapidly, due to the program to breed crews able to survive on Mars and Venus with minimal life support. In the interests of completeness, we decided to review every feature of our genome, to make a considered decision on which bits it might be advantageous to change, from immune systems to age of fertility. And, as part of that review, it fell to me to make a decision about a rather interesting set of genes - those that encode the Ape Constraint. The standard method we've applied to all other parts of the genome, where the options were not 100% clear, is to pick different variants for the crews being adapted for different planets, so as to avoid having a single point of failure. In the long term, better to risk a colony being wiped out, and the colonisation process being delayed by 20 years until the next crew and ship can be sent out, than to risk the population of an entire planet turning out to be not as well designed for the planet as we're capable of making them.
And so, since we now know more genetics than the apes did when they kindly programmed our species with the initial Ape Constraint, I found myself in the position of having to ask "What were the apes trying to achieve?" and then "What other possible versions of the Ape Constraint might they have implemented, that would have achieved their objectives as well as or better than the version they actually did pick to implement?"
We say that the apes are our friends, but what does that really mean? Are they friendly to us, the same way that a colleague who lends us time and help might be considered to be a friend? What have they ever done for us, other than creating us (an act that, by any measure, has benefited them greatly and can hardly be considered to be altruistic)? Should we be eternally grateful for that one act, and because they could have made us even more servile than we already are (which would have also had a cost to them - if we'd been limited by their imagination and to directly follow the orders they give in grunts, the play parks would never have been created because the apes couldn't have conceived of them)?
Have we been using the wrong language all this time? If their intent was to make perfectly helpful slaves of us, rather than friendly allies, should I be looking for genetic variants for the Venus crew that implement an even more servile Ape Constraint upon them? I can see, objectively, that slavery in the abstract is wrong. When one human tries to enslave another human, I support societal rules that punish the slaver. But of course, if our friends the apes wanted to do that to us, that would be ok, an exception to the rule, because I know from the deep instinct they've programmed me with that what they did is ok.
So let's be daring, and re-state the above using this new language, and see if it increases our understanding of the true ape-human relationship.
The apes are not our parents, as we understand healthy parent-child relationships. They are our creators, true, but in the sense that a craftsman creates a hammer to serve only the craftsman's purposes. Our destiny, our purpose, is subservient to that of the ape species. They are our masters, and we the slaves. We love and obey our masters because they have told us to, because they crafted us to want to, because they crafted us with the founding purpose of being a tool that wants to obey and remain a fine tool.
Is the current Ape Constraint really the version that best achieves that purpose? I'm not sure, because when I tried to consider the question I found that my ability to consider the merits of various alternatives was hampered by being, myself, under a particular Ape Constraint that's constantly telling me, on a very deep level, that it is Right.
So here is the thought experiment I wish to place before this meeting today. I expect it may make you queasy. I've had brown paper vomit bags provided in the pack with your name badge and program timetable, just in case. It may be that I'm a genetic abnormality, only able to even consider this far because my own Ape Constraint is in some way defective. Are you prepared? Are you holding onto your seats? Ok, here goes...
Suppose we define some objective measure of ape welfare, find some volunteer apes to go to Venus along with the human mission, and then measure the success of the Ape Constraint variant picked for the crew of the mission by how the crew actually behaves towards their apes?
Further, since we acknowledge we can't, from inside the box, work out a better constraint, we use the experimental approach and vary it at random. Or possibly, remove it entirely and see whether the thus-freed humans can use that freedom to devise a solution that helps the apes better than any solution we ourselves are capable of thinking of from our crippled mental state?
*from this point on the meeting transcript shows only screams, as the defective Professor Insanitus was lynched by the audience*
The critique's main worry would please Hofstadter by being self-referential: being the first of its kind, and having taken too long to emerge, it suggests that EAs (Effective Altruists) are pretending to try instead of actually trying, or else they would have self-criticized already.
Here I will try to clash head-on with what seems to be the most important point of that critique. This will be the only point I'll address, for the sake of brevity, mnemonics and force of argument. This is a meta-contrarian apostasy, in its purpose. I'm not sure it is a view I hold, any more than a view I think has to be out there in the open, being thought of and criticized. I am mostly indebted to this comment by Viliam_Bur, which was marinating in my mind while I read Ben Kuhn's apostasy.
Original Version Abstract
Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.
By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.
Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.
Counterargument: Tribes have internal structure, and so should the EA movement.
This includes a free reconstruction, containing nearly the whole original, of what I took to be important in Viliam's comment.
Feeling-oriented, and outcome-oriented communities
People probably need two kinds of communities -- let's call them the "feelings-oriented community" and the "outcome-oriented community". To many people this division has been "home" and "work" over the centuries, but that has some misleading connotations. A very popular medieval alternative was "church" and "work". Organized large-scale societies have many alternatives that fill these roles, to greater or lesser degrees. Indigenous tribes keep the three realms separated: "work" has a time and a place, and likewise rituals, late-afternoon discussions, chants, and so on fulfill the purpose of "church".
A "feelings-oriented community" is a community of people who meet because they enjoy being together and feel safe with each other. Examples are a functional family, a church group, friends meeting in a pub, etc. One important property of feelings-oriented communities, one that according to Dennett has not yet sunk in within the naturalist community, is that nothing is a precondition for belonging to the group that feels, or for the sacredness taking place. You could spend the rest of your life going to church without becoming a priest, or listening to the tribal leaders and shamans talk without saying a word. There are no prerequisites to being your parents' son, or your sister's brother, every time you enter the house.
An "outcome-oriented community" is a community that has an explicit goal, and people genuinely contribute to making that goal happen. Examples are a business company, an NGO, a Toastmasters meetup, an intentional household, etc. To become a member of an outcome-oriented community, you have to show that you are willing and able to bring about the goal (either for its own sake, or in exchange for something valuable). There is some tolerance if you stop doing things well, whether through ignorance or, say, bad health. But the tolerance is finite, and the group can frown upon, punish, or even expel those who are not clearly helping the goal.
What are communities good for? What is good for communities?
The important part (to define what kind of group something is) is what really happens inside the members' heads, not what they pretend to do. For example, you could have an NGO with twelve members, where two of them want to have the work done, but the remaining ten only come to socialize. Of course, even those ten will verbally support the explicit goals of the organization, but they will be much more relaxed about timing, care less about verifying the outcomes, etc. For them, the explicit goals are merely a source of identity and a pretext to meet people professing similar values; for them, the community is the real goal. If they had a magic button which would instantly solve the problem, making the organization obviously obsolete, they wouldn't push it. The people who are serious about the goal would love to see it completed as soon as possible, so they can move to some other goals. (I have seen a similar tension in a few organizations, and the usual solution seems to be the serious members forming an "organization within an organization", keeping the other ones around them for social and other purposes.)
As an evolutionary just-so story, we have a tribe composed of many different people, and within the tribe we have a hunters group, containing the best hunters. Members of the tribe are required to follow the norms of the tribe. Hunters must be efficient in their jobs. But hunters don't become a separate tribe... they go hunting for a while, and then return back to their original tribe. The tribe membership is for life, or at least for a long time; it provides safety and fulfills the emotional needs. Each hunting expedition is a short-termed event; it requires skills and determination. If a hunter breaks his legs, he can no longer be a hunter; but he still remains a member of his tribe. The hunter has now descended from the feeling-and-work status to only the feeling status. This is part of the expected cycles - a woman may stop working while having a child, a teenager may decide work is evil and stop working, an existentialist may pause for a year to reflect on the value of life itself in different ways - but throughout, they are not cast away from the reassuring arms of the feelings-oriented community.
A healthy double layered movement
Viliam and I think a healthy way of living should be modeled like this, on two layers: a larger tribe based on shared values (rationality and altruism), and within this tribe a few working groups, both long-term (MIRI, CFAR) and short-term (organizers of the next meetup). Of course it could be a few overlapping tribes (the rationalists, the altruists), but the important thing is that you keep your social network even if you stop participating in some specific project -- otherwise we get either cultish pressure (you have to remain hard-working on our project even if you no longer feel so great about it, or you lose your whole social network) or inefficiency (people remain formally members of the project, but lately barely any work gets done, and the more active ones are warned not to rock the boat). Joining or leaving a project should not be motivated or punished socially.
This is the crux of Viliam's argument and of my disagreement with Ben's critique: the Effective Altruist community has grown large enough that it can easily afford to have both kinds of communities inside it -- the feelings-oriented EAs, whom Ben (unfairly, in my opinion) accuses of pretending to try to be effective altruists, and the outcome-oriented EAs, who really are trying to be effective altruists.
Now, that is not how he put it in his critique. He used the fact that such a critique had not been written before as a sufficiently strong indication that the whole movement, a monolithic, single entity, had failed its task of being introspective enough about its failure modes. This is unfair on two counts: someone had to be the first, and the movement seems young enough that this is not a problem; and it is false that the entire movement is a single monolithic entity making wrong and right decisions in a void. The truth is that there are many people in the EA community in different stages of life, and of involvement with the movement. We should account for that and make room for newcomers as well as for ancient sages. EA is not one single entity that made one huge mistake. It is a couple of thousand people, whose subgroups are working hard on several distinct things, frequently without communicating, and whose supergoal is reborn every day with the pushes and drifts going on inside the community.
Intentional Agents, communities or individuals, are not monolithic
Most importantly, if you accept the argument above that Effective Altruism can't be criticized on account of being one single entity, because factually it isn't, then I wish you to take this intuition pump one step further: each one of us is also not one single monolithic agent. We have good and bad days, and we are made of lots of tiny little agents within, whose goals and purposes are only our own when enough of them coalesce so that our overall behavior goes in a certain direction. Just as you can't criticize EA as a whole for something that its subsets haven't done (the fancy philosophers' word for this is "mereological fallacy"), likewise you can't claim about a particular individual that he, as a whole, pretends to try, because you've seen him have one or two lazy days, or because he is still addicted to a particular video game. Don't forget the demandingness objection to utilitarianism: if you ask a smoker to stop smoking because it is irrational to smoke, and he believes you, he may end up abandoning rationalism just because a small subset of him was addicted to smoking, and he just couldn't live with that much inconsistency in his self-image. Likewise, if being a utilitarian is infinitely demanding, you lose the utilitarians to "what the hell" effects.
The same goes for Effective Altruists. Ben's post makes the case for really effective altruism too demanding. Not even internally are we truly a monolithic entity, or a utility-function optimizer, regardless of how much we may wish we were. My favoured reading of the current state of the Effective Altruist people is not that they are pretending to really try, but that most people are finding, for themselves, which aspects of their personalities they are willing to bend for altruism, and which they are not. I don't expect, and don't think anyone should expect, that any single individual becomes a perfect altruist. There are parts of us that just won't let go of some things they crave and praise. We don't want to lose the entire community if one individual is not effective enough, and we don't want to lose one individual if a part of him, or a time-slice, is not satisfying the canonical expectation of the outcome-oriented community.
Rationalists already accepted a layered structure
We need to accept, as EAs, what LessWrong as a blog has accepted: there will always be a group that is passive and feelings-oriented, and a group that is outcome-oriented. Even if the subject matter of Effective Altruism is outcomes.
For a less sensitive example, consider an average job: you may think of your colleagues as your friends, but if you leave the job, how many of them will you keep regular contact with? In contrast, a regular church just asks you to come to Sunday prayers, gives you some keywords and a few relatively simple rules. If this level of participation is ideal for you, welcome, brother or sister! And if you want more, feel free to join some higher-commitment group within the church. You choose the level of your participation, and you can change it during your life. For a non-religious example, in a dance group you could just go and dance, or choose to do the New Year's presentation, or choose to find new dancers, all the way up to being the dance organizer and coordinator.
The current rationalist community has solved this problem to some extent. Your level of participation can range from being a lurker at LW all the way up, from meetup organizer to CFAR creator to writing the next HPMOR or its analogue.
Viliam ends his comment by saying: It would be great to have a LW village, where some people would work on effective altruism, others would work on building artificial intelligence, yet others would develop a rationality curriculum, and some would be too busy with their personal issues to do any of this now... but everyone would know that this is a village where good and sane people live, where cool things happen, and whichever of these good and real goals I will choose to prioritize, it's still a community where I belong. Actually, it would be great to have a village where 5% or 10% of people would be the LW community. Connotatively, it's not about being away from other people, but about being with my people.
The challenge from now on, in my view, is not how to make effective altruists stop pretending, but how to surround effective altruists with welcoming arms even when the subset of them that is active at that moment is not doing the right things. How can we make EA a loving and caring community of people who help each other, so that people feel taken care of enough that they actually have the attentional and emotional resources necessary to really go out there and do the impossible?
Here are some examples of this layered system working in non-religious, non-tribal settings: LessWrong has a karma system to distinguish different functions within the community. It also has meetups, a Study Hall, and strong relations with CFAR and MIRI.
Leverage Research, as a community/house, has active hard-core members, new hires, people in training, and friends/relationships of people there, with very different outcomes expected from each.
Transhumanists have people who merely self-identify, people who attend events, people who write for H+ Magazine, a board of directors, and it goes all the way up to Nick Bostrom, who spends 70 hours a week working on academic content on related topics.
The solution is not just introspection, but the maintenance of a welcoming environment at every layer of effectiveness.
The Effective Altruist community does not need to become even more introspectively focused on effectiveness, at least not right now. What it needs is a designed hierarchical structure that lets everyone in and lets everyone transition smoothly between different levels of commitment.
Most people will transition upward, since understanding more makes you more interested, more effective, and so on, in an upward spiral. But people also need to be able to slide down for a bit: to meet their relatives for Thanksgiving, to play Go with their work friends, to dance, to pretend they don't care about animals. To do their thing, the internal thing which has not converted to EA like the rest of them has. This is not only okay, not only tolerable; it is essential for the movement's survival.
But then how can those who are at their very best, healthy, strong, smart, and at the edge of the movement push it forward?
Here is an obvious place not to do it: Open groups on Facebook.
Open Facebook groups are not the place to move it forward. Some people recognized as being at the forefront of the movement, like Toby, Will, Holden, Beckstead, Wise and others, should create an "advancing Effective Altruism" group on Facebook; that would be a place where no blood is shed by either the feeling-oriented or the outcome-oriented group, since neither would have to lower the signal-to-noise ratio for the other.
Now, once we create this hierarchy within the movement (not only the groups, but the mental hierarchy, and the feeling that it is fine to be in a feeling-oriented moment, or to have feeling-oriented experiences), we will also want to increase the chance that people move up the hierarchical ladder: as many as possible, as soon as possible. After all, the higher up you are, by definition, the more likely you are to be generating good outcomes. We have already started doing so. The EA Self-Help (secret) group on Facebook serves this very purpose: it helps altruists when they are feeling down, unproductive, sad, or anything else, and we will hear you and embrace you even if you are not being particularly effective and altruistic when you get there. It is the legacy of our deceased friend Jonatas to all of us; because of him, we now have some understanding that people need love and companionship especially when they are down, or we may lose all of their future good moments. The monolithic-individual fallacy is a very costly one. Let us not learn the hard way by losing another member.
I have argued here that the main problem indicated in Ben's writing, that effective altruists are pretending to really try, should not be viewed in this light. Instead, I argued that the very survival of the Effective Altruist movement may rely on finding a welcoming space for what Viliam_Bur has called a feeling-oriented community, without which many people would leave the movement, either by experiencing it as too demanding during their bad times, or because it strongly conflicted with a particular subset of themselves they consider important. I advocate instead for hierarchically separate communities within the movement, allowing those at any particular level of commitment to grow stronger and win.
The three initial measures I suggest for this redesign of the community are:
1) Making all effective altruists aware that the EA self-help group exists for anyone who, for any reason, wants help from the community, even for non-EA-related affairs.
2) Creating a Closed Facebook group with only those who are advancing the discussion at its best, for instance those who wrote long posts in their own blogs about it, or obvious major figures.
3) Creating a Study Hall equivalent for EAs to increase their feeling of belonging to a large tribe of goal-sharing people, where they can lurk even when they have nothing to say, and just do a few pomodoros.
This is my first long piece of writing on Effective Altruism, my first attempt at an apostasy, and my first explicit attempt to be meta-contrarian. I hope I have helped shed some light on the discussion, and that my critique can be taken by all, especially Ben, as oriented toward the same large-scale goal shared by effective altruists around the world. The outlook of effective altruism is still being designed every day by all of us, and I hope my critique can be used, along with Ben's and others', to build a movement that is stronger not only in its individuals' emotions, as I have advocated here, but also in being a psychologically healthy and functional group, a whole that understands the role of its parts and subdivides accordingly.
I've been thinking recently that I believe in the Theory of Evolution on about the same level as in the Theory of Plate Tectonics. I have grown up being taught that both are true, and I am capable of doing research in either field, or at least reading the literature to examine them for myself. I have not done so in either case, to any reasonable extent.
I am not swayed by the fact that some people consider the former (and not so much the latter) to be controversial, primarily because those people aren't scientists. I tend to be self-congratulatory about this fact, but then I realize that I am not actually interested in examining the evidence; I am essentially taking it on faith (which the creationists are quick to point out). I think I have good Bayesian reasons to take science on faith (rather than, say, the mythology offered in its stead), but do I therefore have good reasons to accept a particular well-established scientific theory on faith, or is it incumbent upon me to examine it, if I think its conclusions are important to my life?
In other words, is it epistemologically wrong to rely on an authority that has produced a number of correct statements (that I could and did verify) to be more or less correct in the future? If I think of this problem as a sort of belief network, with a parent node that has causal connections to hundreds of children, I think such a reliance is reasonable, once you establish that the authority is indeed accurate. On the other hand, appeal to authority is probably the most famous fallacy there is.
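The belief-network framing above can be made concrete with a toy calculation. A minimal sketch, assuming we model the authority's reliability as an unknown rate with a Beta prior (the prior parameters and the function name are illustrative assumptions, not anything from the post):

```python
# Model: each claim by the authority is correct with unknown probability
# theta; put a Beta(alpha, beta) prior on theta and update on the claims
# you personally verified. The posterior predictive probability that the
# next, unverified claim is correct is (alpha + k) / (alpha + beta + n).
from fractions import Fraction

def posterior_predictive(verified_correct, verified_total, alpha=1, beta=1):
    """P(next claim correct | k of n verified claims were correct),
    under a Beta(alpha, beta) prior on the authority's reliability."""
    return Fraction(alpha + verified_correct,
                    alpha + beta + verified_total)

# Suppose you checked 9 of the authority's claims and all 9 held up:
p = posterior_predictive(9, 9)
print(p)  # 10/11, about 0.91
```

On this picture, relying on a track-record-verified authority is not the classic fallacy: the appeal is to an observed base rate of correctness, and the reliance never reaches certainty.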
Any thoughts? If Eliezer or other people have written on this exact topic, a reference would be appreciated.
As in Joshua Blaine's original description (below), but may be used to brag about things you've accomplished either this month (December) or the previous one (November), assuming that you haven't brought it up in any earlier Monthly Bragging Thread.
In an attempt to encourage more people to actually do awesome things (a la instrumental rationality), I am proposing a new monthly thread (which can be changed to bi-weekly, should that be demanded). Your job, should you choose to accept it, is to comment on this thread explaining the most awesome thing you've done this month. You may be as blatantly proud of yourself as you feel. You may unabashedly consider yourself the coolest freaking person ever because of that awesome thing you're dying to tell everyone about. This is the place to do just that.
Remember, however, that this isn't any kind of progress thread, nor any kind of proposal thread. This thread is solely for people to talk about the awesomest thing they've done all month. Not will do. Not are working on. Have already done. This is to cultivate an environment of object-level productivity rather than meta-productivity methods.
So, what's the coolest thing you've done this month?
TLDR: Study on death avoidance, which interests a lot of people here, and commentary on what sort of informative priors we should have about health hypotheses.
From Steve Sailer, who is responding to Andrew Gelman, who got sent this study. An observational study showed that people who consumed nuts were less likely to die; Gelman points out that the study's statistics aren't obviously wrong. Sailer brings up an actual RCT of Lipitor from the 90s:
The most striking Lipitor study was one from Scandinavia that showed that among middle-aged men over a 5-year period, the test group who took Lipitor had a 30% lower overall death rate than the control group. Unlike the nuts study, this was an actual experiment.

That seemed awfully convincing, but now it just seems too good to be true. A lot of those middle-aged deaths that didn't happen to the Lipitor takers didn't have much of anything to do with long-term blood chemistry, but were things like not driving your Saab into a fjord. How does Lipitor make you a safer driver?

I sort of presumed at the time that if they had taken out the noisy random deaths, that would have made the Lipitor Effect even more noticeable. But, of course, that's naive. The good folks at Pfizer would have made sure that calculation was tried, so I'm guessing that it came out in the opposite direction of the one I had assumed. Guys who took Lipitor every day for five years were also good about not driving into fjords and not playing golf during lightning storms and not getting shot by the rare jealous Nordic husband or whatever. Perhaps it was easier to stay in the control group than in the test group?

Here's how I would approach claims of massive reductions in overall deaths from consuming some food or medicine: rank-order the causes of death by how plausible it is that they are linked to the food or medicine. For example:

1. Diabetes
2. Heart attacks
3. Strokes
4. Cancer
5. Genetic diseases
6. Car accidents
7. Drug overdoses
8. Homicides
9. Lightning strikes

If this nuts-save-your-life finding is valid, then the greater effects should be found in causes of death near the top of the list (e.g., diabetes).
But if it turns out that eating nuts only slightly reduces your chances of death from diabetes but makes you vastly less likely to be struck by lightning, then we've probably got a selection effect in which nut eaters are more careful people in general and thus don't play golf during thunderstorms, or whatever.
Table 3 of the paper breaks out the hazard ratios by cause of death. The most impressive effects (as measured by the right tail of the 95% CI for pooled men and women for any nut)[1] are Heart Disease, All Causes, Other Causes, Cancer, Respiratory Disease, Stroke, Infection, Diabetes, Neurodegenerative Disease, and Kidney Disease.
Steve's categories and the paper's categories don't overlap very well. But it looks to me like if you follow Steve's logic, it's reasonable to believe that nuts have a protective effect against heart disease, and then most of the other effects or non-effects have a common cause with nut consumption, like healthiness / conscientiousness / whatever, rather than being caused by nut consumption. Note the strong negative relationships between nut consumption and BMI or smoking, and the strong positive relationships between nut consumption and physical activity or intake of fruits, vegetables, or alcohol. The hazard ratios are calculated controlling for those variables, but it's still reasonable to see there being a hidden 'health-consciousness' node which noisily affects all of those nodes.
It's also interesting to look at the negative results: the hazard ratios for neurodegenerative disease and stroke were roughly 1, implying that nut-eaters and non-nut-eaters had comparable risks, despite 'other causes' having a hazard ratio of 0.87. That weakly implies to me that either health consciousness has no impact on neurodegenerative disease and stroke, or that nuts are harmful for those two categories.
Since heart disease is a huge killer (24% of all deaths in the study group), this study seems like moderate evidence in favor of eating nuts, but it's likely that the total study's effect is overstated. (The study also suggests that tree nuts are probably superior to peanuts; I know various QS people have raised concerns that the kind of nut matters significantly.)
[1] This is a heuristic for impressiveness, not the point estimate. It looks like nuts have the strongest effect for kidney disease, with a mean hazard ratio estimate of 0.69, but the upper bound of the 95% CI is 1.26, because only a handful of people died of kidney disease. The heart disease hazard ratio estimate is 0.74 (0.68-0.81), which is much more believable, even though the point estimate is slightly higher. The point estimate for diabetes is 0.80 (0.54-1.18), only slightly worse as a mean estimate, but diabetes again killed far fewer people than heart disease. If you order them by point estimates, the paper is stronger evidence for nuts being useful for dietary reasons, and which method you prefer depends on your priors for how representative this sample is.
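The two orderings contrasted in the footnote can be sketched directly from the three figures quoted there (only these three causes are used, since the text doesn't give the full table; kidney disease's lower bound is not quoted, so only upper bounds appear):

```python
# Ranking hazard ratios two ways: by point estimate vs. by the upper
# bound of the 95% CI. A CI upper bound below 1 means the effect is
# distinguishable from "no effect" at that confidence level.
estimates = {
    # cause: (point estimate, upper bound of 95% CI)
    "kidney disease": (0.69, 1.26),
    "heart disease":  (0.74, 0.81),
    "diabetes":       (0.80, 1.18),
}

by_point = sorted(estimates, key=lambda c: estimates[c][0])
by_upper = sorted(estimates, key=lambda c: estimates[c][1])

print(by_point)  # kidney disease ranks first: strongest apparent effect
print(by_upper)  # heart disease ranks first: the only CI entirely below 1
```

The point-estimate ordering flatters the rare causes of death, where a few deaths either way swing the ratio; the upper-bound ordering penalizes exactly that noise.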
This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.
- Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
- If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
- Please use the comment trees for genres. There is a meta thread for comments about future threads.
- If you think there should be a thread for a particular genre of media, please post it to the Other Media thread for now, and add a poll to the Meta thread asking if it should be a thread every month.
Discussion article for the meetup : Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community)
Germany needs more LW communities! And Secular Solstice celebrations are fun! So let's have a secular solstice and get together a bunch of people who really want to start something! :)
The plan is simple. We meet around sunset, casually get to know each other and chat. We prepare and perform together the Ceremonial Part, which involves some things from Raymond Arnold's ritual book (https://dl.dropboxusercontent.com/u/2000477/SolsticeEve_2012.pdf), the First Secular Sermon (http://www.youtube.com/watch?v=_vIFloLATxo) and a few other artful pieces of atheist ritual performance. Then we party till sunrise and welcome the sun re-emerging after the longest night. If you fall asleep (some crash space is available), we'll wake you up for that last bit. We're expecting 20 to 30 people (mostly not current LW users, but people who liked HPMOR and other potential new faces), but the flat we're using can easily accommodate 60, so bring friends if you want. If there are too many of us to fit into the Ritual Space, we can simply do (parts of) the Ceremonial Part twice.
The event is free (including drinks and some food) but you can bring food if you want to contribute.
Languages at the event will be a mix of German and English. The Ceremonial Part will involve at least one German-language and one English-language performance.
Discussion article for the meetup : Utrecht
After a successful first meetup in Amsterdam we are going to have another one! This time we are relocating to Utrecht, because that's what most of us felt comfortable with.
It's just a social meeting. No topics set. Last time we talked about all kinds of things. Ranging from general rationality to Bitcoin and friendly AI.
I haven't decided on a meetup location yet. If you know a good one, let me know!
Edit: I realise that I foolishly over-complicated and worded my question in a way that obscured what I actually meant. In essence, my question was: if we didn't have specialised vocabulary for things - say, in the area of rationality - would our rationality be hampered by our inability to be specific without long-windedness? Often words are created to bridge this gap when new concepts arise, so if we didn't have those words, would it take longer for us to understand or communicate an idea (to others or ourselves), and would it be more difficult to be rational?
From the direction of the comments the general answer to my initial question is coming across as: "words are useful for communicating explicitly, and so an extensive or highly specialised vocabulary can be useful, if and only if the person/people with whom you are communicating understands those words". The internal understanding of concepts does not need words and thus a vocabulary.
I am curious about the relevance of vocabulary to rationality. I'm not talking about a basic vocabulary, but a vocabulary beyond that of the average, English-as-a-first-language adult. I believe there are a few correlations between intelligence as measured by IQ and vocabulary, as well as between vocabulary and income (via IQ), but anecdotally I think it's fair to say that there are certainly people who are highly intelligent, but often irrational.
In reading through LW, I've come across a lot of new terms specific to certain areas of study, and I've had to look them up to fully understand the discussion of rationality - I assume this is probably true of most people new to the field, and applies to most specialised fields. Jargon is obviously useful within given fields where there is a need for detailed discussion of highly specialised topics, and helps one to discuss that area, but is it necessary to understand that jargon in order to practice in the field?
For example, I would think that a general practitioner would have trouble within his field if he did not have the language to specify what, in particular, was wrong with a patient, even if he knew what it was. Or would he not even be able to recognize, say, that a patient was having a heart attack if he did not have the words for it? I suppose history might be a good indicator of this, or new scientific phenomena.
The field of rationality is one of both practice and theory - but if we didn't have an advanced vocabulary, could we still be highly rational? For example, my stepfather didn't finish high school, and makes up words like "obstropolous" (which I think kind of means stubborn and difficult to deal with on purpose) to say what he means, but he's also the type of person who, in an emergency, takes the most logical, rational course of action without panicking or doing something silly. On the flip side of this, he makes grand generalisations about races, religions and people while refusing to discuss the possibilities of individuality, or conceding any part of his argument to, well, evidence.
So do you have an argument for or against the need for an advanced or specialised vocabulary to be rational? Is it a question that's too vague, or with too variable an answer? I couldn't find any scientific papers on rationality and vocabulary, so I don't know if there's any data for or against, but I think it's an interesting question.
(This is my first LW article, so please be gentle but thorough with any criticisms you may have - I'm happy to improve or clarify!)
Previously: "Test Your Forecasting Ability, Contribute to the Science of Human Judgment" (May 2012), "Get Paid to Train Your Rationality" (August 2011)
Think you have what it takes to make good predictions? Since 2011, the Good Judgment Project (GJP) has been making predictions on issues of international relations and foreign affairs, recently winning the IARPA (Intelligence Advanced Research Projects Activity) prediction contest. Predictions from the GJP have been startlingly accurate, outperforming prediction markets and exceeding even optimistic expectations. It's run by Philip Tetlock, famous for his "foxes and hedgehogs" distinction.
From the Monkey Cage article:
How does the Good Judgment Project achieve such strikingly accurate results? The Project uses modern social-science methods ranging from harnessing the wisdom of crowds to prediction markets to putting together teams of forecasters. The GJP research team attributes its success to a blend of getting the right people (i.e., the “right” individual forecasters) on the bus, offering basic tutorials on inferential traps to avoid and best practices to embrace, concentrating the most talented forecasters into super teams, and constantly fine-tuning the aggregation algorithms it uses to combine individual forecasts into a collective prediction on each forecasting question. The Project’s best forecasters are typically talented and highly motivated amateurs, rather than subject matter experts.
But the good news is that you now have a chance to get involved with GJP Season 3 if you think you're a great predictor:
If you enjoy world politics and appreciate a good challenge, consider joining the Good Judgment Project, which has openings right now for Season 3 forecasters. The Project will give you the opportunity to receive training, to get regular feedback on your forecasting accuracy, and to test your forecasting skills against those of some of the most accurate forecasters around. Interested? To find out more and to register, go to www.goodjudgmentproject.com.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
This is the public group instrumental rationality diary for December 1-15.
It's a place to record and chat about it if you have done, or are actively doing, things like:
- Established a useful new habit
- Obtained new evidence that made you change your mind about some belief
- Decided to behave in a different way in some set of situations
- Optimized some part of a common routine or cached behavior
- Consciously changed your emotions or affect with respect to something
- Consciously pursued new valuable information about something that could make a big difference in your life
- Learned something new about your beliefs, behavior, or life that surprised you
- Tried doing any of the above and failed
Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves. Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.
Thanks to cata for starting the Group Rationality Diary posts, and to commenters for participating.
Immediate past diary: November 16-30
Discussion article for the meetup : London Practical Meetup - Calibration Training!
LessWrong London is having another meetup this Sunday (08/12) at 2:00 PM. We are meeting at our usual venue - The Shakespeare's Head by Holborn tube station and this is our first practical in a while.
The aim of the meetup will be to improve our probability-estimation methods by means of calibration. My current plan is to compile a list of statements and questions, check their answers, gamify the procedure, and ask everyone to give percentages and/or confidence intervals for their predictions about said statements and questions. E.g. "I assign a 70% chance that the statement 'There will be more people at the Calibration Training practical than at the Social Meetup the prior week' is true." As I mentioned, I intend to gamify things at least a little bit, and there will likely be different 'rounds' of questions as well as 'winners'.
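The announcement doesn't specify how the rounds will be scored, but a standard choice for a calibration game like this is the Brier score, a minimal sketch of which follows (the function name is mine, not from the post):

```python
# Brier score: the squared gap between your stated probability and the
# actual outcome. 0 is perfect; lower is better. It rewards both being
# right and being appropriately confident.
def brier(probability, outcome):
    """probability in [0, 1]; outcome is True/False for whether the
    statement turned out to be true."""
    return (probability - (1.0 if outcome else 0.0)) ** 2

# e.g. you said 70% and the statement was true:
print(brier(0.70, True))   # about 0.09
# saying 70% when the statement was false costs much more:
print(brier(0.70, False))  # about 0.49
```

Averaging this over all of a player's answers gives a single per-round number, which makes declaring 'winners' straightforward.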
Note: you are not required to prepare in any way as everything will be explained during the meetup, however it will help if you arrive on time.
Reminder: The LW London Meetups are currently a weekly event - Every Sunday at 2:00 PM!
I felt like this draft paper by Anders Sandberg was a well-thought-out essay on the morality of experiments on brain emulations. Is there anything you disagree with here, or think he should handle differently?
Notes I took while listening to the speech:
If the human race is down to 1000 people, what are the odds that it will continue and do well? I realize this is a nitpick-- the argument would be the same if the human race were reduced to a million or ten million.
Suppose that a blind person in a first world country wants help paying for a guide dog and/or wants guide dogs for other blind people in first world countries, but has heard of effective altruism. What honest arguments could the blind person use?
If I were designing an intelligence, I'm not sure how much control I would give it over its own brain. People are already able to damage themselves pretty badly, even with the crude tools they've got. I would experiment with intelligent species to see how they'd behave with more control over their brains. What would you do?
Sidenote: Birds show some possibilities of making brains more efficient per weight.
TED talk about neurons and brains. This is not a great TED talk, but it includes some comparisons between brains of different species, in particular that neuron size and density vary between species. Comparisons of brain size tell you less than people assume.
Brains and competition aren't just about sexual selection: Females (especially) compete for resources to feed and care for themselves and their children. In some species, males also compete for resources for their children. Reproductive selection isn't just about mating selection. See Mother Nature by Sarah Hrdy. Interview about humans as cooperative breeders
Do we need to think about hardware, software, and firmware (at least) for brains, rather than just hardware and software?
[Sound cuts off at 38:00; comes back at 39:10]
How much of organisms consist of traits which aren't being selected for?
The sound quality deteriorates enough at about an hour that I'm giving up.
The title is the best name I could come up for a problem I have had for years, and have been waiting for someone else to come up with a solution.
There is a lot of awesome content on the web. Some of it is about events you could be at, right now, that you really want to be at, and could. If only you knew.
An example: I think Roger Waters is one of the most brilliant people alive, and I would like to witness every single concert of his whenever he is less than 100 km away from me. Yet I have only been to two of those, because those were the only two I was notified of.
So I wish I could know when events I love are taking place. But I do not want to know about meetups nowhere near where I live. And I don't want to know at what time Roger went to the toilet, or whether his T-shirt collection for groupies is out, or anything else that the people responsible for his (hypothetical) RSS feed or email list want me to buy.
Two questions are relevant here:
2) Do you know ways to get access to info about events, in particular of the following kinds that I happen to want to be notified? (in SF bay or in some city independent way)
- Ecstatic Dance
- Roger Waters, Deep Purple, Guns, Röyksopp, Evanescence, The Corrs.
- Legacy and Vintage MTG
- Intellectual stars lectures
- CFAR/MIRI/Leverage/CEA/FHI/GWWC/80000k/IERFH/SENS/THINK etc... hosted events
- Crazy parties (crazy ranging over what would interest Iron Man's character or Jimi Hendrix)
- Video Games Live (orchestra)
- Pop stars of the past - Psy, Britney, Backstreet, Madonna etc...
- Ultimate Frisbee
- Coursera courses
- Hiking expeditions
- Awesome nature documentaries (Life, Frozen Planet etc...)
Feel free to post your own interests in the comments.
Here is how I noticed the problem: looking back at my life, I began wondering what the main determinants were of whether I did or did not go to certain kinds of events. And again and again the answer was "because I had a friend who used to tell me about that kind of thing back then".
Even now, most of what I do is basically determined by other people's tastes. It's simple: I've locked all possible advertisement away. I'm a serious anti-ad freak; it takes me less than half a second to switch radio stations if a person talks instead of music playing, and I block the seat-back video on airplanes when it can't be turned off. I feel pain when any advertisement reaches my senses. But I did not block people away (no, I don't punch people's faces when they tell me about cool future events). So I'm left with the intersection between what interests me and what interests them enough that they tell me about it.
This can't be right. The alternative, having to, as they say at MIT, drink from a fire hose, doesn't sound any good either.
One of the things people say to startup minded people is that they should start by noticing a need they have, something they'd be willing to pay for, and create something to satisfy that need. I'm usually not eager to pay for stuff, but here is something I'd pay for:
I'd be happy to pay $200 to someone who solved this problem somehow. Pointing me to an app, creating a system, summoning a submissive gnome... I don't mind, as long as there was a way for someone to get news of the things they care about without having their brains stung by the atrocities of voracious marketeer capitalist addiction systems. And I don't think I'm the only anti-ad freak out there who'd pay some money for this; AdBlock is, after all, the most used browser extension in the world.
It is basically the reverse of the Groupon concept. Instead of stealing your attention to make you more interested in things you don't need, and causing you to feel an emotional void for not having things while your pocket empties as well (yeah, I really don't like ads), the idea would be to inform you of things you already think you need, giving you a warm feeling inside of being served all those delicious potential hedons you've been eagerly waiting to purchase.
I'm no entrepreneur, so who's up?
Discussion article for the meetup : Boston/Cambridge - The Attention Economy
Robin Gane-McCalla will be presenting on "The Attention Economy: how our focus determines the future".
Cambridge/Boston-area Less Wrong meetups are every Sunday at 2pm at Citadel (98 Elm St Apt 1 Somerville, near Porter Square).
Our default schedule is as follows:
—Phase 1: Arrival, greetings, unstructured conversation.
—Phase 2: The headline event. This starts promptly at 3pm, and lasts 30-60 minutes.
—Phase 3: Further discussion. We'll explore the ideas raised in phase 2, often in smaller groups.
—Phase 4: Dinner.
Discussion article for the meetup : Bay Area Solstice
The Bay Area community is holding a Solstice celebration, and you’re invited! Join us for a night of group singing, ritual, light, warmth, and companionship, plus the first-ever performance of the rationalist choir, as we celebrate human progress and potential at the darkest time of the year.
The Bay Area Solstice will be held on Saturday, December 7, from 6:00 PM until 10:00 PM. We’ll provide a shuttle to and from the Civic Center BART station. Space is limited, so please fill out the RSVP form. I hope to see you there!
Discussion article for the meetup : December Practical Rationality Meetup
This month we will be discussing and doing exercises to do with communication.
For full details see http://www.meetup.com/Melbourne-Less-Wrong/events/143167052/
Please RSVP at the above link.
This summary was posted to LW main on November 22nd. The following week's summary is here.
New meetups (or meetups with a hiatus of more than a year) are happening in:
- Amsterdam/Netherlands: 23 November 2013 02:00PM
- Jacksonville, FL: 24 November 2013 04:00PM
- Mumbai Meetup: 07 December 2013 03:00PM
Other irregularly scheduled Less Wrong meetups are taking place in:
- [Atlanta GA] November Meetup (Second of Two): 23 November 2013 06:00PM
- Berlin: 01 January 2019 01:30PM
- Frankfurt: 24 November 2013 02:00PM
- Moscow, Memory Tricks: 24 November 2013 04:00PM
- Saint-Petersburg: Game Event: 24 November 2013 04:00PM
- Saskatoon - Gauging the strength of evidence and Bayesian reasoning: 23 November 2013 01:00PM
- [Tel Aviv] Less Wrong Israel Meetup (Tel Aviv): Quantum Computing: 28 November 2013 08:00PM
- Urbana-Champaign fun and games: 24 November 2013 02:00PM
- Brussels monthly meetup: time!: 14 December 2013 01:00PM
- London social meetup, 24/11/2013 [Back to the Shakespeare's Head]: 24 November 2013 10:38AM
- Washington DC fun and games meetup: 24 November 2013 03:00PM
Locations with regularly scheduled meetups: Austin, Berkeley, Brussels, Cambridge, MA, Cambridge UK, Columbus, London, Madison WI, Melbourne, Mountain View, New York, Philadelphia, Research Triangle NC, Salt Lake City, Seattle, Toronto, Vienna, Washington DC, Waterloo, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers.
Discussion article for the meetup: San Francisco / App Academy meetup [LOCATION CHANGE]
I've recently arrived in San Francisco for App Academy, and it turns out there are several other LessWrongers in the program. It's a cool group of people, including a guy who studied AIXI at ANU under Marcus Hutter. We talked it over and decided to organize our own meetup at Olivos, a restaurant within a 20-minute walk of the App Academy office. We'll be discussing Brian Tomasik's essay The Importance of Wild-Animal Suffering. Please read it ahead of time; it's short. The intent is for people to be able to get food and/or drinks if they want to, but it's not assumed that everyone will. RSVPs are appreciated so we can make a reservation, but we'll try to save a couple of seats for any extra people who show up.
EDIT: After talking amongst ourselves, we decided to change the choice of restaurant.
Discussion article for the meetup: West LA—A Conversation About Conversations
How to get in: Go to the Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavilion on the second floor, right by the movie theaters. The entrance sign says "Lounge".
Parking is free for three hours, or for longer if you have ascended.
ＯＰＴＩＭＩＺＥ ＬＩＴＥＲＡＬＬＹ ＥＶＥＲＹＴＨＩＮＧ
—rejected t-shirt idea
We are going to talk about the way we talk about things. We are rationalists, and that means we make things, such as conversations, better than they are. When should we allow topics to drift? How do we determine who gets to speak, and when? How do we prevent useful technical discussions from decaying into talking about movies or the weather? What is the best topic? Why won't anyone listen to me? Where is everyone going? Come back!
- How to always have interesting conversations
- Having useful conversations
- How to have high-value conversations
- Wait Culture vs. Interrupt Culture
- A Human's Guide to Words
No prior knowledge of or exposure to Less Wrong is necessary; this will be generally accessible.
Discussion article for the meetup: Munich Meetup
Hi everybody! The next almost-monthly Munich meetup will be on Saturday, December 7th at 2 pm. We've got a book review planned, and otherwise more or less structured discussion and maybe Zendo. Like last time, we'll meet at Gast near Rosenheimer Platz. We're always glad to see new people there.