Why you should attend EA Global and (some) other conferences
Many of you know about Effective Altruism and the associated community. It overlaps heavily with LessWrong, and has been significantly influenced by the culture and ambitions of the community here.
One of the most important things happening in EA over the next few months is EA Global, the biggest EA and Rationality community event to date, happening throughout August in three locations: Oxford, Melbourne, and San Francisco (which is unfortunately already full, despite our choosing the largest venue that Google had to offer).
The purpose of this post is to make the case for why attending the event is a good idea, and to serve as a hub for information that might be particularly relevant to the LessWrong community (as well as an additional place to ask questions). I am one of the main organizers and very happy to answer any questions that you have.
Is it a good idea to attend EA Global?
This is a difficult question that obviously will not have a single answer, but as best I can tell, and for the majority of people reading this post, the answer seems to be "yes". The EA community has been quite successful at changing the world for the better, and at building an epistemic community that seems to be effective at changing its mind and updating on evidence.
But other people have already argued in favor of supporting the EA movement, and I don't want to repeat everything that they said. Instead I want to focus on a more specific argument: "Given that I believe that EA is overall a promising movement, should I attend EA Global if I want to improve the world (according to my preferences)?"
The key question here is: Does attending the conference help the EA Movement succeed?
How attending EA Global helps the EA Movement succeed
It seems that the success of an organization is highly dependent on the interconnectedness of its members. In general, a rule seems to hold: the better connected the social graph of your organization is, the more effectively it works.
In particular, any significant divide in an organization, any clustering of groups that do not communicate much with each other, seems to significantly reduce the output the organization produces. I wish we had better studies on this, and that I could link to more sources, but everything I've found so far points in this direction. The fact that HR departments are willing to spend extremely large sums of money to encourage the employees of organizations to interact socially with each other is evidence that this is a good rule to follow (though far from conclusive).
What holds for most organizations should also hold for EA. If this is true, then the success of the EA Movement is significantly dependent on the interconnectedness of its members, in both the volume and the quality of its output.
But EA is not a corporation, and EA does not share a large office. If you were to graph out the social graph of EA, it would look very clustered: the Bay Area cluster, the Oxford cluster, the Rationality cluster, the East Coast and West Coast clusters, and many small clusters all over Europe with meetups and small social groups in different countries that have never talked to each other. EA is splintered into many groups, and if EA were a company, the HR department would be very justified in spending a significant chunk of resources on connecting those clusters as much as possible.
There are not many opportunities for us to increase the density of the EA social graph. There are other minor conferences, and online interaction does some part of the job, but the past EA summits were the main events at which people from different clusters of EA met each other for the first time. There they built lasting social connections, and actually caused those separate clusters to become connected. This had a massive positive effect on the output of EA.
Examples:
- Ben Kuhn put me into contact with Ajeya Cotra, resulting in the two of us running a whole undergraduate class on Effective Altruism, which included Giving Games to various EA charities funded with over $10,000. (You can find documentation of the class here.)
- The last EA summit resulted in both Tyler Alterman and Kerry Vaughan being hired by CEA; they are now full-time employees who are significantly involved in helping CEA set up a branch in the US.
- The summit and retreat last year sparked significant collaboration between CFAR, Leverage, CEA, and FHI, resulting in these organizations repeatedly helping each other coordinate their fundraising, hiring processes, and logistics.
This is going to be even more true this year. If we want EA to succeed and continue shaping the world for the better, we want as many people as possible to come to the EA Global events, ideally from as many separate groups as possible. This means that you, especially if you feel somewhat disconnected from EA, should seriously consider coming. I estimate the benefit of this to be much bigger than the cost of a plane ticket and the entrance ticket (~$500). If you find yourself significantly constrained by financial resources, consider applying for financial aid, and we will very likely be able to arrange something for you. By coming, you provide a service to the EA community at large.
How do I attend EA Global?
As I said above, we are organizing three events: Oxford, Melbourne, and San Francisco. We are particularly lacking representation from many groups in mainland Europe, and it would be great if they could make it to Oxford. Oxford also has the most open spots and is going to be much bigger than the Melbourne event (300 vs. 100).
If you want to apply for Oxford go to: eaglobal.org/oxford
If you want to apply for Melbourne go to: eaglobal.org/melbourne
If you require financial aid, you will be able to put in a request after we've sent you an invitation.
Taking the reins at MIRI
Hi all. In a few hours I'll be taking over as executive director at MIRI. The LessWrong community has played a key role in MIRI's history, and I hope to retain and build your support as (with more and more people joining the global conversation about long-term AI risks & benefits) MIRI moves towards the mainstream.
Below I've cross-posted my introductory post on the MIRI blog, which went live a few hours ago. The short version is: there are very exciting times ahead, and I'm honored to be here. Many of you already know me in person or through my blog posts, but for those of you who want to get to know me better, I'll be running an AMA on the effective altruism forum at 3PM Pacific on Thursday June 11th.
I extend to all of you my thanks and appreciation for the support that so many members of this community have given to MIRI throughout the years.
16 types of useful predictions
How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.
And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.
I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.
At this point I should clarify that there are two main goals predictions can help with:
- Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).
- Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time).
If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.
But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.
So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers I care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection's not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or -- approaching the problem from the opposite direction -- how to take issues you care about and turn them into answerable questions.
I've been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I'm sure there are plenty more, though, and hope you'll share your own as well.)
- Predict how long a task will take you. This one's a given, considering how common and impactful the planning fallacy is.
Examples: "How long will it take to write this blog post?" "How long until our company's profitable?"
- Predict how you'll feel in an upcoming situation. Affective forecasting – our ability to predict how we'll feel – has some well-known flaws.
Examples: "How much will I enjoy this party?" "Will I feel better if I leave the house?" "If I don't get this job, will I still feel bad about it two weeks later?"
- Predict your performance on a task or goal.
One thing this helps me notice is when I've been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
Examples: "Will I stick to my workout plan for at least a month?" "How well will this event I'm organizing go?" "How much work will I get done today?" "Can I successfully convince Bob of my opinion on this issue?"
- Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.).
This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends' (or readers') personalities and worldviews.
Examples: "Will this video get an unusually high number of likes?" "Will linking to this article spark a fight in the comments?"
- When you try a new activity or technique, predict how much value you'll get out of it.
I've noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
Examples: "How much will Pomodoros boost my productivity?" "How much will I enjoy swing dancing?"
- When you make a purchase, predict how much value you'll get out of it.
Research on money and happiness shows two main things: (1) as a general rule, money doesn't buy happiness, but also that (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and to spend your money more effectively than the average person.
Examples: "How much will I wear these new shoes?" "How often will I use my club membership?" "In two months, will I think it was worth it to have repainted the kitchen?" "In two months, will I feel that I'm still getting pleasure from my new car?"
- Predict how someone will answer a question about themselves.
I often notice assumptions I've been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question and about my overall model of the person.
Examples: "Does it bother you when our meetings run over the scheduled time?" "Did you consider yourself popular in high school?" "Do you think it's okay to lie in order to protect someone's feelings?"
- Predict how much progress you can make on a problem in five minutes.
I often have the impression that a problem is intractable, or that I've already worked on it and considered all of the obvious solutions. But then, when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am surprised to come away with a promising new approach to the problem.
Example: "I feel like I've tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that's promising enough to try?"
- Predict whether the data in your memory supports your impression.
Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
Examples: "I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?"
"It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?"
- Pick one expert source and predict how they will answer a question.
This is a quick shortcut to testing a claim or settling a dispute.
Examples: "Will Cochrane Medical support the claim that Vitamin D promotes hair growth?" "Will Bob, who has run several companies like ours, agree that our starting salary is too low?"
- When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you've gotten to know him better, you will consider your first impressions of him to have been accurate.
A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
Examples: "All I know about this guy I'm about to meet is that he's a banker; I'm moderately confident that he'll seem cocky." "Based on the one conversation I've had with Lisa, she seems really insightful – I predict that I'll still have that impression of her once I know her better."
- Predict how your Facebook friends will respond to a poll.
Examples: I often post social etiquette questions on Facebook. For example, I recently did a poll asking, "If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?" I confidently predicted most people would say "worse," and I was wrong.
- Predict how well you understand someone's position by trying to paraphrase it back to him.
The illusion of transparency is pernicious.
Examples: "You said you think running a workshop next month is a bad idea; I'm guessing you think that's because we don't have enough time to advertise, is that correct?"
"I know you think eating meat is morally unproblematic; is that because you think that animals don't suffer?"
- When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her.
For best results, don't reveal which of you is on which side when you're explaining the issue to your arbiter.
Example: "So, at work today, Bob and I disagreed about whether it's appropriate for interns to attend hiring meetings; what do you think?"
- Predict whether a surprising piece of news will turn out to be true.
This is a good way to hone your bullshit detector and improve your overall "common sense" models of the world.
Examples: "This headline says some scientists uploaded a worm's brain -- after I read the article, will the headline seem like an accurate representation of what really happened?"
"This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?"
- Predict whether a quick online search will turn up any credible sources supporting a particular claim.
Example: "Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?"
I have one additional, general thought on how to get the most out of predictions:
Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, "Predict whether a fight will break out in the comments? Well, there's no objective way to say whether something officially counts as a 'fight' or not…" Or, "Predict whether I'll be able to find credible sources supporting X? Well, who's to say what a credible source is, and what counts as 'supporting' X?"
And indeed, objective metrics are preferable, all else equal. But all else isn't equal. Subjective metrics are much easier to generate, and they're far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not -- even if you haven't pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense "yes," or a common sense "no." And sometimes it'll be "um...sort of?", but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other.
Along similar lines, I usually don't assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative "very confident," "pretty confident," "weakly confident" scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).
There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don't let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don't exist.
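If you do want a lightweight written record anyway, a spreadsheet or a few lines of code are enough. Here is a minimal sketch (in Python, with made-up example predictions; the level-to-probability mapping is just the rough 90%/75%/60% correspondence mentioned above, not a recommendation) of logging predictions at a qualitative confidence level and later checking how well-calibrated each level turned out to be:

```python
from collections import defaultdict

# Rough mapping from the qualitative scale above to probabilities
# (the illustrative 90%/75%/60% figures mentioned in the post).
CONFIDENCE_LEVELS = {
    "very confident": 0.90,
    "pretty confident": 0.75,
    "weakly confident": 0.60,
}

# Hypothetical prediction log: (prediction, confidence level, did it come true?)
log = [
    ("I'll finish the blog post by Friday", "pretty confident", False),
    ("Linking this article will spark a fight in the comments", "weakly confident", True),
    ("Bob will agree our starting salary is too low", "very confident", True),
]

def calibration_report(log):
    """For each confidence level, compare the stated probability with the
    observed fraction of predictions at that level that came true."""
    outcomes = defaultdict(list)
    for _, level, came_true in log:
        outcomes[level].append(came_true)
    for level, results in outcomes.items():
        observed = sum(results) / len(results)
        print(f"{level}: stated ~{CONFIDENCE_LEVELS[level]:.0%}, "
              f"observed {observed:.0%} over {len(results)} predictions")

calibration_report(log)
```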
New forum for MIRI research: Intelligent Agent Foundations Forum
Today, the Machine Intelligence Research Institute is launching a new forum for research discussion: the Intelligent Agent Foundations Forum! It's already been seeded with a bunch of new work on MIRI topics from the last few months.
We've covered most of the (what, why, how) subjects on the forum's new welcome post and the How to Contribute page, but this post is an easy place to comment if you have further questions (or if, maths forbid, there are technical issues with the forum instead of on it).
But before that, go ahead and check it out!
(Major thanks to Benja Fallenstein, Alice Monday, and Elliott Jin for their work on the forum code, and to all the contributors so far!)
EDIT 3/22: Jessica Taylor, Benja Fallenstein, and I wrote forum digest posts summarizing and linking to recent work (on the IAFF and elsewhere) on reflective oracle machines, on corrigibility, utility indifference, and related control ideas, and on updateless decision theory and the logic of provability, respectively! These are pretty excellent resources for reading up on those topics, in my biased opinion.
Rationality: From AI to Zombies
Eliezer Yudkowsky's original Sequences have been edited, reordered, and converted into an ebook!
Rationality: From AI to Zombies is now available in PDF, EPUB, and MOBI versions on intelligence.org (link). You can choose your own price to pay for it (minimum $0.00), or buy it for $4.99 from Amazon (link). The contents are:
- 333 essays from Eliezer's 2006-2009 writings on Overcoming Bias and Less Wrong, including 58 posts that were not originally included in a named sequence.
- 5 supplemental essays from yudkowsky.net, written between 2003 and 2008.
- 6 new introductions by me, spaced throughout the book, plus a short preface by Eliezer.
The ebook's release has been timed to coincide with the end of Eliezer's other well-known introduction to rationality, Harry Potter and the Methods of Rationality. The two share many similar themes, and although Rationality: From AI to Zombies is (mostly) nonfiction, it is decidedly unconventional nonfiction, freely drifting in style from cryptic allegory to personal vignette to impassioned manifesto.
The 333 posts have been reorganized into twenty-six sequences, lettered A through Z. In order, these are titled:
- A — Predictably Wrong
- B — Fake Beliefs
- C — Noticing Confusion
- D — Mysterious Answers
- E — Overly Convenient Excuses
- F — Politics and Rationality
- G — Against Rationalization
- H — Against Doublethink
- I — Seeing with Fresh Eyes
- J — Death Spirals
- K — Letting Go
- L — The Simple Math of Evolution
- M — Fragile Purposes
- N — A Human's Guide to Words
- O — Lawful Truth
- P — Reductionism 101
- Q — Joy in the Merely Real
- R — Physicalism 201
- S — Quantum Physics and Many Worlds
- T — Science and Rationality
- U — Fake Preferences
- V — Value Theory
- W — Quantified Humanism
- X — Yudkowsky's Coming of Age
- Y — Challenging the Difficult
- Z — The Craft and the Community
Several sequences and posts have been renamed, so you'll need to consult the ebook's table of contents to spot all the correspondences. Four of these sequences (marked in bold) are almost completely new. They were written at the same time as Eliezer's other Overcoming Bias posts, but were never ordered or grouped together. Some of the others (A, C, L, S, V, Y, Z) have been substantially expanded, shrunk, or rearranged, but are still based largely on old content from the Sequences.
One of the most common complaints about the old Sequences was that there was no canonical default order, especially for people who didn't want to read the entire blog archive chronologically. Despite being called "sequences," their structure looked more like a complicated, looping web than like a line. With Rationality: From AI to Zombies, it will still be possible to hop back and forth between different parts of the book, but this will no longer be required for basic comprehension. The contents have been reviewed for consistency and in-context continuity, so that they can genuinely be read in sequence. You can simply read the book as a book.
I have also created a community-edited Glossary for Rationality: From AI to Zombies. You're invited to improve on the definitions and explanations there, and add new ones if you think of any while reading. When we release print versions of the ebook (as a six-volume set), a future version of the Glossary will probably be included.
HPMOR Wrap Parties: Resources, Information and Discussion
Harry Potter and the Methods of Rationality - Wrap Party Summary Thread
As many of you probably read in the HPMOR author's note last month, I am the coordinator of the HPMOR wrap parties. Many of you have reached out to me, I have put hundreds of you into contact with each other, and over 20 parties on 4 continents are now going to happen. Now it is time to get as many people to the events as possible, make sure that we all get the most out of them, and use the momentum that HPMOR has brought this community. This post will serve as a central location for all information and resources available for the parties, as well as a place for discussion in the comments.
Information
I set up a few different systems to coordinate everyone, and make it easier for everyone interested in the wrap parties to connect. Here they are:
The Map:
This map can help you get a quick overview of how many people in your area are strongly interested, and who might help you with organizing an event. Remember that not even half of the people currently RSVP'd to the Facebook events have added themselves to the map, so it shows the absolute minimum level of engagement in your area. I will be adding all events to the map as they are posted in the Facebook group. Please add yourself to the map if you can! (But please be careful not to delete anyone else's pins, to use the correct pin type, and not to create any empty pins.)
The Facebook Group:
This is the main location for discussion of the wrap parties, and also the place where all of the events are conveniently collected. You can find all events under the "Events" tab, and if you add your own event in this group you can conveniently invite everyone who has joined it. I would still additionally advise you to invite all of your friends who might be interested, since they might not have joined the group.
The Organizer Mailing List:
This mailing list is the fastest way for me to reach all of the organizers at the same time, and also the fastest way for all organizers to be kept up to speed with the newest resources available. Use this mailing list to discuss ideas and get help from other organizers.
Parties [Updated: March 9th, 10:00PM]
To help you quickly get a sense of whether there is a party happening in your area, here is a list of all the parties that I have so far learned about, with links to their respective Facebook events. Currently everything is on Facebook because that is much easier to coordinate, but I will try to add contact information for organizers for all of these parties very soon, so that people without Facebook can easily find the information that they need:
Parties in Asia:
- Singapore
- Mumbai, India
- Colombo, Sri Lanka
- Herzelia, Israel
- Kharagpur, India
- New Delhi, India
- Bangalore, India
- Bangalore, India Nr. 2
Parties in Australia:
Parties in Europe:
- London, United Kingdom
- Sheffield, United Kingdom
- Brussels, Belgium
- Krakow, Poland
- Cambridge, United Kingdom
- Belgrade, Serbia
- Berlin, Germany
- Turku, Finland
- Madrid, Spain
- Dublin, Ireland
- Cologne, Germany [marcel_mueller@mail.de]
- Copenhagen, Denmark
- St. Petersburg, Russia
- Warsaw, Poland
- Dnipropetrovsk, Ukraine
Parties in North America:
- Berkeley, California
- Mountain View, California
- Phoenix, Arizona
- Washington DC
- Portland, Oregon
- New Orleans, Louisiana
- Sarasota, Florida
- Gainesville, Florida
- Denver, Colorado
- Fort Collins, Colorado
- Lawrence, Kansas
- Seattle, Washington
- MIT, Massachusetts [14th of March]
- Cambridge, Massachusetts [15th of March]
- New York, New York
- Chicago, Illinois
- Middleton, Wisconsin
- Charlotte, North Carolina
- Ferndale, Michigan
- Pittsburgh, Pennsylvania
- Mexico City, Mexico
- Toronto, Canada
- Austin, Texas
- Atlanta, Georgia
- Salt Lake City, Utah
- Albuquerque, New Mexico
- Waterloo, Canada
Resources
Handbook:
To help everyone get their party started, Brayden McLean compiled a wonderful handbook for party organizers:
https://docs.google.com/document/d/1Ya34ACL9J9Amch-4NSDnZ1idmSFNswWk9wjdt5B1098/edit?usp=sharing
Free Books:
We are providing free copies of the first 17 chapters of HPMOR to all parties in the U.S.! Just fill out this form today or tomorrow, and we will try to send you as many copies as you think you will need to hook all of your friends.
https://docs.google.com/forms/d/1mM-jgiy9teaINEED0WvXCCEZD12FP1m7Qn6vnYLZ4kM/viewform
The Party Spreadsheet:
I compiled a spreadsheet with all of the parties that I've learned of so far. This will hopefully help people without Facebook get into contact with the organizers of the closest party, and generally make the information more easily available. Commenting is enabled, so if you are one of the organizers and want any of the information changed, please leave a comment on the spreadsheet and I will change it as quickly as possible.
https://docs.google.com/spreadsheets/d/12fPKHtZxkK5aWnLQfttZFMrWkGfMGxmperXUxSK0I5w/edit?usp=sharing
Call for Stories:
I want to read a few HPMOR stories at the Berkeley event, and also just generally allow people to share how HPMOR has affected their lives. For that, we have the Call for Stories document, which allows people to write their own HPMOR stories and share them with the world.
I will continue keeping this post updated with all valuable material that is sent to me.
Rationality Quotes Thread March 2015
Another month, another rationality quotes thread. The rules are:
- Please post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
- Do not quote yourself.
- Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
- No more than 5 quotes per person per monthly thread, please.
- Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
Announcing the Complice Less Wrong Study Hall
(If you're familiar with the backstory of the LWSH, you can skip to paragraph 5. If you just want the link to the chat, click here: LWSH on Complice)
The Less Wrong Study Hall was created as a tinychat room in March 2013, following Mqrius and ShannonFriedman's desire to create a virtual context for productivity. In retrospect, I think it's hilarious that a bunch of the comments ended up being a discussion of whether LW had the numbers to get a room that consistently had someone in it. The funny part is that those estimates were based on the assumption that people would spend about an hour a day in it.
Once it was created, it was so effective that people started spending their entire day doing pomodoros (32 minutes work + 8 minutes break) in the LWSH, and now often even stay logged in while doing chores away from their computers, just for the cadence of focus and the sense of company. So there's almost always someone there, and often 5-10 people.
A week in, a call was put out for volunteers to program a replacement for the much-maligned tinychat. As it turns out though, video chat is a hard problem.
So nearly 2 years later, people are still using the tinychat.
But a few weeks ago, I discovered that you can embed the tinychat applet into an arbitrary page. I immediately set out to integrate LWSH into Complice, the productivity app I've been building for over a year, which counts many rationalists among its alpha & beta users.
The focal point of Complice is its today page, which consists of a list of everything you're planning to accomplish that day, colorized by goal. Plus a pomodoro timer. My habit for a long time has been to have this open next to LWSH. So what I basically did was integrate these two pages. On the left, you have a list of your own tasks. On the right, a list of other users in the room, with whatever task they're doing next. Then below all of that, the chatroom.
(Something important to note: I'm not planning to point existing Complice users, who may not be LWers, at the LW Study Hall. Any Complice user can create their own coworking room by going to complice.co/createroom)
With this integration, I've solved many of the core problems that people wanted addressed for the study hall:
- an actual ding sound beyond people typing in the chat
- synchronized pomodoro time visibility
- pomos that automatically start, so breaks don't run over
- Intentions — what am I working on this pomo?
- a list of what other users are working on
- the ability to show off how many pomos you've done
- better welcoming & explanation of group norms
There are a couple other requested features that I can definitely solve but decided could come after this launch:
- rooms with different pomodoro durations
- member profiles
- the ability to precommit to showing up at a certain time (maybe through Beeminder?!)
The following points were brought up in the Programming the LW Study Hall post or on the List of desired features on the github/nnmm/lwsh wiki, but can't be fixed without replacing tinychat:
- efficient with respect to bandwidth and CPU
- page layout with videos lined up down the left for use on the side of monitors
- chat history
- encryption
- everything else that generally sucks about tinychat
It's also worth noting that if you were to think of the entirety of Complice as an addition to LWSH... well, it would definitely look like feature creep, but at any rate there would be several other notable improvements:
- daily emails prompting you to decide what you're going to do that day
- a historical record of what you've done, with guided weekly, monthly, and yearly reviews
- optional accountability partner who gets emails with what you've done every day (the LWSH might be a great place to find partners!)
(This article posted to Main because that's where the rest of the LWSH posts are, and this represents a substantial update.)
Attempted Telekinesis
Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.
The Importance of Sidekicks
[Reposted from my personal blog.]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. I appreciated the principle and project of rationality as things that were deeply important to me; I was pretty pro-self improvement, and kept tsuyoku naritai as my motto for several years. But the rationality community, the people who shared this interest of mine, often seemed baffled by my values and desires. I wasn’t ambitious, and had a hard time wanting to be. I had a hard time wanting to be anything other than a nurse.
It wasn’t until this August that I convinced myself that this wasn’t a failure in my rationality, but rather a difference in my basic drives. It’s around then, in the aftermath of the 2014 CFAR alumni reunion, that I wrote the following post.
I don’t believe in life-changing insights (that happen to me), but I think I’ve had one–it’s been two weeks and I’m still thinking about it, thus it seems fairly safe to say I did.
At a CFAR Monday test session, Anna was talking about the idea of having an “aura of destiny”–it’s hard to fully convey what she meant and I’m not sure I get it fully, but something like seeing yourself as you’ll be in 25 years once you’ve saved the world and accomplished a ton of awesome things. She added that your aura of destiny had to be in line with your sense of personal aesthetic, to feel “you.”
I mentioned to Kenzi that I felt stuck on this because I was pretty sure that the combination of ambition and being the locus of control that “aura of destiny” conveyed to me was against my sense of personal aesthetic.
Kenzi said, approximately [I don't remember her exact words]: “What if your aura of destiny didn’t have to be those things? What if you could be like…Samwise, from Lord of the Rings? You’re competent, but most importantly, you’re *loyal* to Frodo. You’re the reason that the hero succeeds.”
I guess this isn’t true for most people–Kenzi said she didn’t want to keep thinking of other characters who were like this because she would get so insulted if someone kept comparing her to people’s sidekicks–but it feels like now I know what I am.
So. I’m Samwise. If you earn my loyalty, by convincing me that what you’re working on is valuable and that you’re the person who should be doing it, I’ll stick by you whatever it takes, and I’ll *make sure* you succeed. I don’t have a Frodo right now. But I’m looking for one.
It then turned out that quite a lot of other people recognized this, so I shifted from “this is a weird thing about me” to “this is one basic personality type, out of many.” Notably, Brienne wrote the following comment:
Sidekick” doesn’t *quite* fit my aesthetic, but it’s extremely close, and I feel it in certain moods. Most of the time, I think of myself more as what TV tropes would call a “dragon”. Like the Witch-king of Angmar, if we’re sticking of LOTR. Or Bellatrix Black. Or Darth Vader. (It’s not my fault people aren’t willing to give the good guys dragons in literature.)
For me, finding someone who shared my values, who was smart and rational enough for me to trust him, and who was in a much better position to actually accomplish what I most cared about than I imagined myself ever being, was the best thing that could have happened to me.
She also gave me what’s maybe one of the best and most moving compliments I’ve ever received.
In Australia, something about the way you interacted with people suggested to me that you help people in a completely free way, joyfully, because it fulfills you to serve those you care about, and not because you want something from them… I was able to relax around you, and ask for your support when I needed it while I worked on my classes. It was really lovely… The other surprising thing was that you seemed to act that way with everyone. You weren’t “on” all the time, but when you were, everybody around you got the benefit. I’d never recognized in anyone I’d met a more diffuse service impulse, like the whole human race might be your master. So I suddenly felt like I understood nurses and other people in similar service roles for the first time.
Sarah Constantin, who according to a mutual friend is one of the most loyal people who exists, chimed in with some nuance to the Frodo/Samwise dynamic: “Sam isn’t blindly loyal to Frodo. He makes sure the mission succeeds even when Frodo is fucking it up. He stands up to Frodo. And that’s important too.”
Kate Donovan, who also seems to share this basic psychological makeup, added “I have a strong preference for making the lives of the lead heroes better, and very little interest in ever being one.”
Meanwhile, there were doubts from others who didn’t feel this way. The “we need heroes, the world needs heroes” narrative is especially strong in the rationalist community. And typical mind fallacy abounds. It seems easy to assume that if someone wants to be a support character, it’s because they’re insecure–that really, if they believed in themselves, they would aim for protagonist.
I don’t think this is true. As Kenzi pointed out: “The other thing I felt like was important about Samwise is that his self-efficacy around his particular mission wasn’t a detriment to his aura of destiny – he did have insecurities around his ability to do this thing – to stand by Frodo – but even if he’d somehow not had them, he still would have been Samwise – like that kind of self-efficacy would have made his essence *more* distilled, not less.”
Brienne added: “Becoming the hero would be a personal tragedy, even though it would be a triumph for the world if it happened because I surpassed him, or discovered he was fundamentally wrong.”
Why write this post?
Usually, “this is a true and interesting thing about humans” is enough of a reason for me to write something. But I’ve got a lot of other reasons, this time.
I suspect that the rationality community, with its “hero” focus, drives away many people who are like me in this sense. I’ve thought about walking away from it, for basically that reason. I could stay in Ottawa and be a nurse for forty years; it would fulfil all my most basic emotional needs, and no one would try to change me. Because oh boy, have people tried to do that. It’s really hard to be someone who just wants to please others, and to be told, basically, that you’re not good enough–and that you owe it to the world to turn yourself ambitious, strategic, Slytherin.
Firstly, this is mean regardless. Secondly, it’s not true.
Samwise was important. So was Frodo, of course. But Frodo needed Samwise. Heroes need sidekicks. They can function without them, but function a lot better with them. Maybe it’s true that there aren’t enough heroes trying to save the world. But there sure as hell aren’t enough sidekicks trying to help them. And there especially aren’t enough talented, competent, awesome sidekicks.
If you’re reading this post, and it resonates with you… Especially if you’re someone who has felt unappreciated and alienated for being different… I have something to tell you. You count. You. Fucking. Count. You’re needed, even if the heroes don’t realize it yet. (Seriously, heroes, you should be more strategic about looking for awesome sidekicks. AFAIK only Nick Bostrom is doing it.) This community could use more of you. Pretty much every community could use more of you.
I’d like, someday, to live in a culture that doesn’t shame this way of being. As Brienne points out, “Society likes *selfless* people, who help everybody equally, sure. It’s socially acceptable to be a nurse, for example. Complete loyalty and devotion to “the hero”, though, makes people think of brainwashing, and I’m not sure what else exactly but bad things.” (And not all subsets of society even accept nursing as a Valid Life Choice.) I’d like to live in a world where an aspiring Samwise can find role models; where he sees awesome, successful people and can say, “yes, I want to grow up to be that.”
Maybe I can’t have that world right away. But at least I know what I’m reaching for. I have a name for it. And I have a Frodo–Ruby and I are going to be working together from here on out. I have a reason not to walk away.