LessWrong seems to be a big fan of spaced-repetition flashcard programs like Anki, Supermemo, or Mnemosyne. I used to be. After using them religiously for 3 years in medical school, I now categorically advise against using them for large volumes of memorization.
[A caveat before people get upset: I think they are appropriate in certain situations, and I have not tried to use them to learn a language, which seems to be their most popular use. More at the bottom.]
A bit more history: 30 other students and I tried using Mnemosyne (and some used Anki) for multiple tests. At my school, we have a test approximately every 3 weeks, and each test covers about 75 pages of high-density outline-format notes. Many stopped after 5 or so such tests, saying that they simply did not get enough return on their time. I stuck with the cards longer and used them more than anyone else.
Incidentally, I failed my first year and had to repeat.
By the end of that third year (and studying for my Step 1 boards, a several-month process), I lost faith in spaced-repetition cards as an effective tool for my memorization demands. I later met with a learning-skills specialist, who felt the same way, and had better reasons than my intuition/trial-and-error:
- Flashcards are less useful for learning the “big picture”
- Specifically, if you are memorizing a large amount of information, there is often a hierarchy, organization, etc. that can make learning the whole thing easier, and you lose the constant visual reminder of the larger context when using flashcards.
- Flashcards do not take advantage of spatial, mapping, or visual memory, all of which the human mind is much better optimized for. It is not so well built to memorize pairings of seemingly arbitrary concepts with few or no intuitive links. My preferred methods are, in essence, hacks that use your visual and spatial memory rather than rote repetition.
Here are examples of the typical kind of things I memorize every day and have found flashcards to be surprisingly worthless for:
- The definition of Sjögren's syndrome
- The contraindications of Metronidazole
- The significance of a rise in serum αFP
Here is what I now use in place of flashcards:
- Venn diagrams, etc., to compare and contrast similar lists. (This is more specific to medical school, where you learn subtly different diseases.)
- Mnemonic pictures. I have used this myself for years to great effect, and later learned it was taught by my study-skills expert, though I'm surprised I haven't found them formally named and taught anywhere else. The basic concept is to make a large picture, where each detail on the picture corresponds to a detail you want to memorize.
- Memory palaces. I recently learned how to properly use these, and I'm a true believer. When I only had the general idea to “pair things you want to memorize with places in your room” I found it worthless, but after I was taught a lot of do's and don'ts, they're now my favorite way to memorize any list of 5+ items. If there's enough demand on LW I can write up a summary.
Spaced repetition is still good for knowledge you need to retrieve immediately, when a 2-second delay would make it useless. I would still consider spaced repetition to memorize some of the more rarely used notes on the treble and bass clefs, if I ever decide to learn to sight-read music properly. I make no comment on its usefulness for learning a foreign language, as I haven't tried it, but if I were to pick one up I personally would start with a Rosetta Stone-esque program.
Your mileage may vary, but after seeing so many people try and reject them, I figured it was enough data to share. Mnemonic pictures and memory palaces are somewhat time-consuming while you're learning to use them. However, if someone has the motivation and discipline to make a stack of flashcards and study them every day indefinitely, then I believe learning and using those skills is a far better use of that time.
Followup to: Lifestyle interventions to increase longevity.
What does it mean for exercise to be optimal?
- Optimal for looks
- Optimal for time
- Optimal for effort
- Optimal for performance
- Optimal for longevity
There may be even more criteria.
We're all likely going for a mix of outcomes, and optimal exercise is going to change depending on your weighting of different factors. So I'm going to discuss something close to a minimum viable routine based on meta-analyses of exercise studies.
Not knowing which sort of exercise yields the best results gives our brains an excuse to stop thinking about it. The intent of this post is to go over the dose responses to various types of exercise. We’re going to break through vague notions like “exercise is good” and “I should probably exercise more” with a concrete plan where you understand the relevant parameters that will cause dramatic improvements.
Does the surveillance state affect us? It has affected me, and I didn't realize that it was affecting me until recently. I give a few examples of how it has affected me:
- I was once engaged in a discussion on Facebook about Obama's foreign policy. Around that time, I was going to apply for a US visa. I stopped the discussion early. Semi-consciously, I was worried that what I was writing would be checked by US visa officials and would lead to my visa being denied.
- I was once really interested in reading up on the Unabomber and his manifesto, because somebody mentioned that he had some interesting ideas, and though fundamentally misguided, he might have been onto something. I didn't explore much because I was worried---again semi-consciously---that my traffic history would be logged on some NSA computer somewhere, and that I'd pattern match to the Unabomber (I'm a physics grad student, the Unabomber was a mathematician).
- I didn't visit Silk Road as I was worried that my visits would be traced, even though I had no plans of buying anything.
- Just generally, I try to not search for some really weird stuff that I want to search for (I'm a curious guy!).
- I was almost not going to write this post.
After moving in with my new roomies (Danny and Bethany of Beeminder), I discovered they have a fair and useful way of auctioning off joint decisions. It helps you figure out how much you value certain chores or activities, and it guarantees that these decisions are worked out in a fair way. They call it "yootling", and wrote more about it here.
A quick example (Note: this only works if all participants are of the types of people who consider this sort of thing a Good Idea, and not A Grotesque Parody of Caring or whatnot):
Use Case: Who Picks up the Kids from Grandma's?
D and B are both busy working, but it's time to pick up the kids from their grandparents' house. They decide to yootle for it.
B bids $100 (In a regular Normal Person exchange, this would be like saying "I'm elbows deep in code right now, and don't want to break flow. I'd really rather continue working right now, but of course I'll go if it's needed.")
D bids $15 (In a regular Normal Person exchange this would be like saying "I don't mind too much, though I do have other things to do now...")
So D "wins" the bid, and B pays him $15 to go get the kids from their grandma's.
Of course, it would be a pain in the butt to constantly be paying each other, so instead they have a 10% chance of paying 10x the amount, and a 90% chance of paying nothing, determined by a random number generator.
This is made easier by the fact that we have a bot to run this, but before that they would use the high-tech solution of Holding Up Fingers.
We may do this multiple times per day, whenever there's a good that we have shared ownership of and one of us wants to offload their shares onto the other person. The goods can be anything, e.g. the last brownie, but they're more often "bads," like who will get up in the middle of the night with a vomiting child, or who will book plane tickets for a trip. We find this an elegant means of assigning loathed tasks. The person who minded least winds up doing the chore, but gets compensated for it at a price that, by their own estimation, was fair.
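The mechanics above are simple enough to sketch in a few lines of code. This is a toy illustration of the scheme as described, not Beeminder's actual bot; the function names are mine:

```python
import random

def yootle(bids):
    """Decision auction as in the example above: the low bidder does the
    task, and the high bidder (who thereby gets out of it) pays the low bid."""
    doer = min(bids, key=bids.get)   # minds least, so does the chore
    payer = max(bids, key=bids.get)  # pays to keep working
    return payer, doer, bids[doer]

def randomized_payment(amount, p=0.10, multiplier=10, rng=random):
    """Instead of settling up every time: pay 10x with 10% probability,
    nothing otherwise. The expected payment is unchanged (0.10 * 10 = 1)."""
    return amount * multiplier if rng.random() < p else 0

payer, doer, price = yootle({"B": 100, "D": 15})
# payer == "B", doer == "D", price == 15: D picks up the kids, B pays $15
```

The randomized settlement trades bookkeeping for variance: over many yootles it averages out to the same transfers, but most rounds involve no payment at all.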
Joint purchase auction

The decision auction and variants are about allocating shared or partially shared resources to one person or the other, or picking one person to do something. Once in a while you have the opposite problem: deciding on a joint purchase.

Suppose Danny thinks we need a new sofa (this is very hypothetical). I think the one we have is just fine, thank you. After some discussion I concede that it would be nice to have a sofa that was less doggy. Danny, being terribly excited about getting a new sofa, does a bunch of research and finds his ideal sofa. I think it is a bit overpriced considering it is going to be a piece of gymnastics equipment for the kids for the next 6 years. Conflict ensues!

I could bluff that I'm not interested in a new sofa at all and that he can buy it himself if he wants it that badly. But he probably doesn't want it that bad, and I do want it a little. If only we could buy the sofa conditional on our combined utility for it exceeding the cost, and pay in proportion to our utilities to boot. Well, thanks to separate finances and the magic of mechanism design, we can! We submit sealed bids for the sofa and buy it if the sum of our bids is enough. (And, importantly, commit to not buying it for at least a year otherwise.) Any surplus is redistributed in proportion to our bids. For example, if Danny bid $80 and I bid $40 to buy a hundred-dollar sofa, then we'd buy it, with Danny chipping in twice as much as me, namely $67 to my $33.
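Under the stated rule, the arithmetic works out like this (a hypothetical sketch; `joint_purchase` is my name for it, not anything official):

```python
def joint_purchase(bids, cost):
    """Sealed-bid joint purchase: buy only if the bids sum to at least the
    cost; each party then pays in proportion to their bid, so any surplus
    is shared pro rata."""
    total = sum(bids.values())
    if total < cost:
        return None  # no purchase (and commit to not revisiting it for a while)
    return {who: round(cost * bid / total) for who, bid in bids.items()}

payments = joint_purchase({"Danny": 80, "Bethany": 40}, cost=100)
# {"Danny": 67, "Bethany": 33}: Danny chips in twice as much, as in the text
```

Note the incentive structure: underbidding risks losing a sofa you actually want, while overbidding risks paying more than your share, which is what keeps the bids honest.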
Generosity without sacrificing social efficiency

If you're thinking "how mercenary all this is!" then, well, I'm unclear how you made it this far into this post. But it's not nearly as cold as it may sound. We do nice things for each other all the time, and frequently use yootling to make sure it's socially efficient to do so.

Suppose I invite Danny to a sing-along showing of Once More With Feeling (this may or may not be hypothetical) and Danny doesn't exactly want to go but can see that I have value for his company. He might (quite non-hypothetically) say "I'll half-accompany you!" by which he means that he'll yootle me for whether he goes or not. In other words, he magnanimously decides to treat his joining me as a 50/50 joint decision. If I have greater value for him coming than he has for not coming, then I'll pay him to come. But if it's the other way around, he will pay me to let him off the hook.

We don't actually care much about the payments, though those are necessary for the auction to work. We care about making sure that he comes to the Buffy sing-along if and only if my value for his company exceeds his value for staying home. The payments are simply what keep us honest in assessing that. The increased fairness — the winner sharing their utility with the loser — is icing.
In another attack on the resource-based model of willpower, Michael Inzlicht, Brandon J. Schmeichel, and C. Neil Macrae have a paper called "Why Self-Control Seems (but may not be) Limited," in press at Trends in Cognitive Sciences. Ungated version here.
Some of the most interesting points:
- Over 100 studies appear to be consistent with self-control being a limited resource, but these studies generally do not observe resource depletion directly; they infer it from whether people's performance declines on a second self-control task.
- The only attempts to directly measure the loss or gain of a resource have been studies measuring blood glucose, but these studies have serious limitations, the most important being an inability to replicate evidence of mental effort actually affecting the level of glucose in the blood.
- Self-control also seems to be replenished by things such as "watching a favorite television program, affirming some core value, or even praying," which would seem to conflict with the hypothesis of inherent resource limitations. The resource-based model also seems evolutionarily implausible.
The authors offer their own theory of self-control. One-sentence summary (my formulation, not from the paper): "Our brains don't want to only work, because by doing some play on the side, we may come to discover things that will allow us to do even more valuable work."
- Ultimately, self-control limitations are proposed to be an exploration-exploitation tradeoff, "regulating the extent to which the control system favors task engagement (exploitation) versus task disengagement and sampling of other opportunities (exploration)".
- Research suggests that cognitive effort is inherently aversive, and that after humans have worked on some task for a while, "ever more resources are needed to counteract the aversiveness of work, or else people will gravitate toward inherently rewarding leisure instead". According to the model proposed by the authors, this allows the organism to both focus on activities that will provide it with rewards (exploitation), but also to disengage from them and seek activities which may be even more rewarding (exploration). Feelings such as boredom function to stop the organism from getting too fixated on individual tasks, and allow us to spend some time on tasks which might turn out to be even more valuable.
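The exploration-exploitation tradeoff the authors borrow is a standard idea in reinforcement learning. As a toy illustration of the tradeoff itself (not of the authors' psychological model), an epsilon-greedy agent mostly exploits the task it believes is most rewarding but occasionally samples alternatives, which is how it discovers when a neglected option is actually better:

```python
import random

def epsilon_greedy(pull, n_arms, steps, epsilon=0.1, seed=0):
    """Toy epsilon-greedy bandit: mostly exploit the best-known arm,
    but sometimes explore the others in case one is secretly better."""
    rng = random.Random(seed)
    counts = [0] * n_arms
    values = [0.0] * n_arms  # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                        # explore
        else:
            arm = max(range(n_arms), key=lambda a: values[a])  # exploit
        reward = pull(arm, rng)
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return values, counts

# Two "tasks": arm 1 rewards more often, so the agent mostly settles on it,
# but only because occasional exploration revealed that it was better.
values, counts = epsilon_greedy(
    lambda arm, rng: 1.0 if rng.random() < (0.3, 0.7)[arm] else 0.0,
    n_arms=2, steps=2000)
```

In the authors' framing, boredom plays roughly the role of epsilon: it forces disengagement from the current task so that potentially more valuable activities get sampled at all.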
The explanation of the actual proposed psychological mechanism is good enough that it deserves to be quoted in full:
Based on the tradeoffs identified above, we propose that initial acts of control lead to shifts in motivation away from “have-to” or “ought-to” goals and toward “want-to” goals (see Figure 2). “Have-to” tasks are carried out through a sense of duty or contractual obligation, while “want-to” tasks are carried out because they are personally enjoyable and meaningful; as such, “want-to” tasks feel easy to perform and to maintain in focal attention. The distinction between “have-to” and “want-to,” however, is not always clear cut, with some “want-to” goals (e.g., wanting to lose weight) being more introjected and feeling more like “have-to” goals because they are adopted out of a sense of duty, societal conformity, or guilt instead of anticipated pleasure.
According to decades of research on self-determination theory, the quality of motivation that people apply to a situation ranges from extrinsic motivation, whereby behavior is performed because of external demand or reward, to intrinsic motivation, whereby behavior is performed because it is inherently enjoyable and rewarding. Thus, when we suggest that depletion leads to a shift from “have-to” to “want-to” goals, we are suggesting that prior acts of cognitive effort lead people to prefer activities that they deem enjoyable or gratifying over activities that they feel they ought to do because it corresponds to some external pressure or introjected goal. For example, after initial cognitive exertion, restrained eaters prefer to indulge their sweet tooth rather than adhere to their strict views of what is appropriate to eat. Crucially, this shift from “have-to” to “want-to” can be offset when people become (internally or externally) motivated to perform a “have-to” task. Thus, it is not that people cannot control themselves on some externally mandated task (e.g., name colors, do not read words); it is that they do not feel like controlling themselves, preferring to indulge instead in more inherently enjoyable and easier pursuits (e.g., read words). Like fatigue, the effect is driven by reluctance and not incapability (see Box 2).
Research is consistent with this motivational viewpoint. Although working hard at Time 1 tends to lead to less control on “have-to” tasks at Time 2, this effect is attenuated when participants are motivated to perform the Time 2 task, personally invested in the Time 2 task, or when they enjoy the Time 1 task. Similarly, although performance tends to falter after continuously performing a task for a long period, it returns to baseline when participants are rewarded for their efforts; and remains stable for participants who have some control over and are thus engaged with the task. Motivation, in short, moderates depletion. We suggest that changes in task motivation also mediate depletion.
Depletion, however, is not simply less motivation overall. Rather, it is produced by lower motivation to engage in “have-to” tasks, yet higher motivation to engage in “want-to” tasks. Depletion stokes desire. Thus, working hard at Time 1 increases approach motivation, as indexed by self-reported states, impulsive responding, and sensitivity to inherently-rewarding, appetitive stimuli. This shift in motivational priorities from “have-to” to “want-to” means that depletion can increase the reward value of inherently-rewarding stimuli. For example, when depleted dieters see food cues, they show more activity in the orbitofrontal cortex, a brain area associated with coding reward value, compared to non-depleted dieters.
There's been a lot of fuss lately about Google's gadgets. Computers can drive cars - pretty amazing, eh? I guess. But what amazed me as a child was that people can drive cars. I'd sit in the back seat while an adult controlled a machine taking us at insane speeds through a cluttered, seemingly quite unsafe environment. I distinctly remember thinking that something about this just doesn't add up.
It looked to me like there was just no adequate mechanism to keep the car on the road. At the speeds cars travel, a tiny deviation from the correct course would take us flying off the road in just a couple of seconds. Yet the adults seemed pretty nonchalant about it - the adult in the driver's seat could have relaxed conversations with other people in the car. But I knew that people were pretty clumsy. I was an ungainly kid but I knew even the adults would bump into stuff, drop things and generally fumble from time to time. Why didn't that seem to happen in the car? I felt I was missing something. Maybe there were magnets in the road?
Now that I am a driving adult, I could more or less explain this to my 12-year-old self:
1. Yes, the course needs to be controlled very exactly and you need to make constant tiny course corrections or you're off to a serious accident in no time.
2. Fortunately, the steering wheel is a really good instrument for making small course corrections. The design is somewhat clumsiness-resistant.
3. Nevertheless, you really are just one misstep away from death and you need to focus intently. You can't take your eyes off the road for even one second. Under good circumstances, you can have light conversations while driving but a big part of your mind is still tied up by the task.
4. People can drive cars - but only just barely. You can't do it safely even while only mildly inebriated. That's not just an arbitrary law - the hit to your reflexes substantially increases the risks. You can do pretty much all other normal tasks after a couple of drinks, but not this.
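Points 1 and 2 together describe a feedback loop: tiny, continual corrections keep an otherwise runaway course bounded. A toy numerical sketch (all the constants here are invented for illustration):

```python
def lane_keeping(steps=50, drift=0.2, gain=0.5):
    """Toy model of points 1-2: a constant disturbance pushes the car off
    the lane centre each step. With a small proportional correction (steer
    back by a fraction of the current offset) the offset stays bounded;
    with no correction it grows without limit."""
    corrected = uncorrected = 0.0
    for _ in range(steps):
        corrected += drift - gain * corrected  # steer back a fraction of the error
        uncorrected += drift                   # hands off the wheel
    return corrected, uncorrected

corrected, uncorrected = lane_keeping()
# corrected settles near drift/gain = 0.4; uncorrected drifts out to ~10
```

The point of the sketch is only that the corrections don't need to be precise, just frequent and roughly proportional to the error, which is why a clumsy human with a forgiving steering wheel can manage it.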
So my 12-year-old self was not completely mistaken but still ultimately wrong. There are no magnets in the road. The explanation for why driving works out is mostly that people are just somewhat more capable than I'd thought. In my more sunny moments I hope that I'm making similar errors when thinking about artificial intelligence. Maybe creating a safe AGI isn't as impossible as it looks to me. Maybe it isn't beyond human capabilities. Maybe.
Edit: I intended no real analogy between AGI design and driving or car design - just the general observation that people are sometimes more competent than I expect. I find it interesting that multiple commenters note that they have also been puzzled by the relative safety of traffic. I'm not sure what lesson to draw.
This was originally a comment on VipulNaik's recent musings about the academic lifestyle versus the job lifestyle. Instead of calling them lifestyles he called them career options, but I'm taking a different emphasis here on purpose.
Due to information-hazard risks, I recommend that Effective Altruists who are still wavering back and forth not read this. EA spoiler alert.
I'd just like to share a cultural difference that I have consistently noticed between Americans and Brazilians, and which seems relevant here.
To have a job and work in the US is taken as a *de facto* biological need. It is as abnormal for an American, in my experience, to consider not working, as it is to consider not breathing, or not eating. It just doesn't cross people's minds.
If anyone has insight above and beyond "Protestant ethics and the spirit of capitalism," let me know about it; I've been waiting for the "why?" for years.
So yeah, let me remind people that you can spend years and years not working; that not getting a job isn't going to kill you or make you less healthy; that ultravagabonding is possible and feasible, and many do it for over six months a year; that I have a friend who lives as the boyfriend of his sponsor's wife in a triad and somehow has never worked a day in his life (the husband of the triad pays for it all; both men are straight); that I've hosted an Argentinian who left graduate economics for two years to randomly travel the world, ended up in Rome, and passed through here on his way back, via couchsurfing; and that Puneet Sahani has now spent well over two years travelling the world with no money and an Indian passport. I've also hosted a lovely Estonian gentleman who works on computers 4 months a year in London to earn pounds, and spends eight months a year getting to know countries while learning their cultures; Brazil was his third country.
Oh, and never forget the Uruguayan couple I just met at a dance festival, who have been travelling as hippies around and around South America for 5 years now, and showed no sign of owning more than 500 dollars' worth of stuff.
Also, in case you'd like to live in a paradise valley taking Santo Daime (a religious ritual involving DMT) about twice a week, you can do it on approximately 500 dollars per month in Vale do Gamarra, where I just spent Carnival; that is what the guy who drove us back does. Given Brazilian or Turkish returns on investment, that would cost you 50,000 bucks, in case you refused to work within the land itself for the 500.
Oh, I forgot to mention that though not having a job certainly makes you unable to do expensive stuff, thus removing the paradox of choice and part of your existential angst (woohoo, fewer choices!), there is nearly no status detraction from not having a job. In fact, during these years in which I was either being an EA and directing an NGO, or studying on my own, or doing a Masters (which, let's agree, is not very time-consuming), my status has increased steadily, and many opportunities would have been lost if I had had a job that wouldn't let me move freely. Things like being invited as a Visiting Scholar to the Singularity Institute, giving a TED talk, directing IERFH, and spending a month working at FHI with Bostrom, Sandberg, and the classic LessWrong poster Stuart Armstrong.
So when thinking about what to do with your future, my dear fellow Americans, please at least consider not getting a job. At least admit what everyone knows from the bottom of their hearts: that jobs are abundant for high-IQ people (especially you, my programmer lurker readers... I know you are there... and you native English speakers, I can see you there, unnecessarily worrying about your earning potential).
A job is truly an instrumental goal, and your terminal goals certainly do have chains of causation leading to them that do not contain a job for 330 days a year. Unless you are a workaholic who experiences flow in virtue of pursuing instrumental goals. Then please, work all day long, donate as much as you can, and may your life be awesome!
About a year ago, I attended my first CFAR workshop and wrote a post about it here. I mentioned in that post that it was too soon for me to tell if the workshop would have a large positive impact on my life. In the comments to that post, I was asked to follow up on that post in a year to better evaluate that impact. So here we are!
Very short summary: overall I think the workshop had a large and persistent positive impact on my life.
However, anyone using this post to evaluate the value of going to a CFAR workshop themselves should be aware that I'm local to Berkeley and have had many opportunities to stay connected to CFAR and the rationalist community. More specifically, in addition to the January workshop, I also
- visited the March workshop (and possibly others),
- attended various social events held by members of the community,
- taught at the July workshop, and
- taught at SPARC.
These experiences were all very helpful in helping me digest and reinforce the workshop material (which was also improving over time), and a typical workshop participant might not have these advantages.
Answering a question
pewpewlasergun wanted me to answer the following question:
I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.
The short answer is: in some sense very few, but a lot of the value I got out of attending the workshop didn't come from specific techniques.
In more detail: to be honest, many of the specific techniques are kind of a chore to use (at least as of January 2013). I experimented with a good number of them in the months after the workshop, and most of them haven't stuck (but that isn't so bad; the cost of trying a technique and finding that it doesn't work for you is low, while the benefit of trying a technique and finding that it does work for you can be quite high!). One that has stuck is the idea of a next action, which I've found incredibly useful. Next actions are the things that to-do list items should be, say in the context of using Remember The Milk. Many to-do list items you might be tempted to write down are difficult to actually do because they're either too vague or too big and hence trigger ugh fields. For example, you might have an item like
- Do my taxes
that you don't get around to until right before you have to because you have an ugh field around doing your taxes. This item is both too vague and too big: instead of writing this down, write down the next physical action you need to take to make progress on this item, which might be something more like
- Find tax forms and put them on desk
which is both concrete and small. Thinking in terms of next actions has been a huge upgrade to my GTD system (as was Workflowy, which I also started using because of the workshop) and I do it constantly.
But as I mentioned, a lot of the value I got out of attending the workshop was not from specific techniques. Much of the value comes from spending time with the workshop instructors and participants, which had effects that I find hard to summarize, but I'll try to describe some of them below:
The workshop readjusted my emotional attitudes towards several things for the better, and at several meta levels. For example, a short conversation with a workshop alum completely readjusted my emotional attitude towards both nutrition and exercise, and I started paying more attention to what I ate and going to the gym (albeit sporadically) for the first time in my life not long afterwards. I lost about 15 pounds this way (mostly from the eating part, not the gym part, I think).
At a higher meta level, I did a fair amount of experimenting with various lifestyle changes (cold showers, not shampooing) after the workshop and overall they had the effect of readjusting my emotional attitude towards change. I find it generally easier to change my behavior than I used to because I've had a lot of practice at it lately, and am more enthusiastic about the prospect of such changes.
(Incidentally, I think emotional attitude adjustment is an underrated component of causing people to change their behavior, at least here on LW.)
Using all of my strength
The workshop is the first place I really understood, on a gut level, that I could use my brain to think about something other than math. It sounds silly when I phrase it like that, but at some point in the past I had incorporated into my identity that I was good at math but absentminded and silly about real-world matters, and I used it as an excuse not to fully engage intellectually with anything that wasn't math, especially anything practical. One way or another the workshop helped me realize this, and I stopped thinking this way.
The result is that I constantly apply optimization power to situations I wouldn't have even tried to apply optimization power to before. For example, today I was trying to figure out why the water in my bathroom sink was draining so slowly. At first I thought it was because the strainer had become clogged with gunk, so I cleaned the strainer, but then I found out that even with the strainer removed the water was still draining slowly. In the past I might've given up here. Instead I looked around for something that would fit farther into the sink than my fingers and saw the handle of my plunger. I pumped the handle into the sink a few times and some extra gunk I hadn't known was there came out. The sink is fine now. (This might seem small to people who are more domestically talented than me, but trust me when I say I wasn't doing stuff like this before last year.)
Reflection and repair
Thanks to the workshop, my GTD system is now robust enough to consistently enable me to reflect on and repair my life (including my GTD system). For example, I'm quicker to attempt to deal with minor medical problems I have than I used to be. I also think more often about what I'm doing and whether I could be doing something better. In this regard I pay a lot of attention in particular to what habits I'm forming, although I don't use the specific techniques in the relevant CFAR unit.
For example, at some point I had recorded in RTM that I was frustrated by the sensation of hours going by without my remembering how I had spent them (usually because I was mindlessly browsing the internet). In response, I started keeping a record of what I was doing every half hour, categorizing each half hour according to a combination of how productively and how intentionally I spent it (in the first iteration it was just how productively I spent it, but I found that this was making me feel too guilty about relaxing). For example:
- a half-hour intentionally spent reading a paper is marked green.
- a half-hour half-spent writing up solutions to a problem set and half-spent on Facebook is marked yellow.
- a half-hour intentionally spent playing a video game is marked with no color.
- a half-hour mindlessly browsing the internet when I had intended to do work is marked red.
The act of doing this every half hour itself helps make me more mindful about how I spend my time, but having a record of how I spend my time has also helped me notice interesting things, like how less of my time is under my direct control than I had thought (but instead is taken up by classes, commuting, eating, etc.). It's also easier for me to get into a success spiral when I see a lot of green.
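For the curious, the colour scheme above is easy to encode. The thresholds here are my guess at the categories, not a rule stated in the post:

```python
def colour(productive_fraction, intentional):
    """Colour-code a half-hour block by how productively (0.0-1.0) and how
    intentionally it was spent, matching the four examples above.
    (The exact cutoffs are my interpretation, not the author's rules.)"""
    if productive_fraction >= 1.0 and intentional:
        return "green"
    if 0.0 < productive_fraction < 1.0:
        return "yellow"
    if intentional:
        return None  # deliberate relaxation: no colour, no guilt
    return "red"     # mindless drift when work was intended

# The four examples from the list:
# colour(1.0, True) -> "green"   (intentionally reading a paper)
# colour(0.5, True) -> "yellow"  (half problem set, half Facebook)
# colour(0.0, True) -> None      (intentionally playing a video game)
# colour(0.0, False) -> "red"    (mindless browsing instead of work)
```

Tracking intentionality separately from productivity is what keeps the scheme from punishing planned leisure, which was the fix mentioned above.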
Being around workshop instructors and participants is consistently intellectually stimulating. I don't have a tactful way of saying what I'm about to say next, but: two effects of this are that I think more interesting thoughts than I used to and also that I'm funnier than I used to be. (I realize that these are both hard to quantify.)
I worry that I haven't given a complete picture here, but hopefully anything I've left out will be brought up in the comments one way or another. (Edit: this totally happened! Please read Anna Salamon's comment below.)
Takeaway for prospective workshop attendees
I'm not actually sure what you should take away from all this if your goal is to figure out whether you should attend a workshop yourself. My thoughts are roughly this: I think attending a workshop is potentially high-value and therefore that even talking to CFAR about any questions you might have is potentially high-value, in addition to being relatively low-cost. If you think there's even a small chance you could get a lot of value out of attending a workshop I recommend that you at least take that one step.
Neal Stephenson's The Diamond Age takes place several decades in the future and this conversation is looking back on the present day:
"You know, when I was a young man, hypocrisy was deemed the worst of vices," Finkle-McGraw said. "It was all because of moral relativism. You see, in that sort of a climate, you are not allowed to criticise others--after all, if there is no absolute right and wrong, then what grounds is there for criticism?" [...]
"Now, this led to a good deal of general frustration, for people are naturally censorious and love nothing better than to criticise others' shortcomings. And so it was that they seized on hypocrisy and elevated it from a ubiquitous peccadillo into the monarch of all vices. For, you see, even if there is no right and wrong, you can find grounds to criticise another person by contrasting what he has espoused with what he has actually done. In this case, you are not making any judgment whatsoever as to the correctness of his views or the morality of his behaviour--you are merely pointing out that he has said one thing and done another. Virtually all political discourse in the days of my youth was devoted to the ferreting out of hypocrisy." [...]
"We take a somewhat different view of hypocrisy," Finkle-McGraw continued. "In the late-twentieth-century Weltanschauung, a hypocrite was someone who espoused high moral views as part of a planned campaign of deception--he never held these beliefs sincerely and routinely violated them in privacy. Of course, most hypocrites are not like that. Most of the time it's a spirit-is-willing, flesh-is-weak sort of thing."
"That we occasionally violate our own stated moral code," Major Napier said, working it through, "does not imply that we are insincere in espousing that code."
I'm not sure if I agree with this characterization of the current political climate; in any case, that's not the point I'm interested in. I'm also not interested in moral relativism.
But the passage does point out a flaw which I recognize in myself: a preference for consistency over actually doing the right thing. I place a lot of stock--as I think many here do--in self-consistency. After all, clearly any moral code which is inconsistent is wrong. But dismissing a moral code for inconsistency, or a person for hypocrisy, is lazy. Morality is hard. It's easy to get a warm glow from the nice self-consistency of your own principles and mistake this for actually being right.
Placing too much emphasis on consistency led me to at least one embarrassing failure. I decided that no one who ate meat could be taken seriously when discussing animal rights: killing animals because they taste good seems completely inconsistent with placing any value on their lives. Furthermore, I myself ignored the whole concept of animal rights because I eat meat, reasoning that it would be inconsistent for me to assign animals any rights. Consistency between my moral principles and my actions--not being a hypocrite--was more important to me than actually figuring out what the correct moral principles were.
To generalize: holding high moral ideals is going to produce cognitive dissonance when you are not able to live up to those ideals. It is always tempting--for me at least--to resolve this dissonance by backing down from those high ideals. An alternative we might try is to be more comfortable with hypocrisy.
Is anyone interested in contacting other people in the LessWrong community to find a job, employee, business partner, co-founder, adviser, or investor?
Connections like this develop inside ethnic and religious groups, as well as among university alums or members of a fraternity. I think that LessWrong can provide the same value.
For example, LessWrong must have plenty of skilled software developers in dull jobs, who would love to work with smart, agenty rationalists. Likewise, there must be some company founders or managers who are having a very hard time finding good software developers.
A shared commitment to instrumental and epistemic rationality should be a good starting point, not to mention a shared memeplex to help break the ice. (Paperclips! MoR!)
Besides being fun, working together with other rationalists could be a good business move.
As a side-benefit, it also has good potential to raise the sanity waterline and help further develop new rationality skills, both personally and as a community.
Naturally, such a connection is not guaranteed to produce results. But it's hard to find the right people to work with, so why not try this route? And although you can cold-contact someone you've seen online, you don't know who's interested in what you have to offer, so I think more effort is needed to bootstrap such networking.
I'd like to gauge interest. (Alexandros has volunteered to help.) If you might be interested in this sort of networking, please fill out this short Google Form [Edit: Survey closed as of April 15]. I'll post an update about what sort of response we get.
Privacy: Although the main purpose of this form is to gauge interest, and other details may be needed to form good connections, the info might be enough to get some contacts going. So, we might use this information to personally connect people. We won't share the info or build any online group with it. If we get a lot of interest we may later create some sort of online mechanism, but we’ll be sure to get your permission before adding you.
Edit April 6: People are still filling out the form, so we'll wait a week or two before reporting on the results.
Edit April 15: See some comments on the results at this comment, below.
I used to spend a lot of time thinking about formal ethics, trying to figure out whether I was leaning more towards positive or negative utilitarianism, about the best courses of action in light of the ethical theories that I currently considered the most correct, and so on. From the discussions that I've seen on this site, I expect that a lot of others have been doing the same, or at least something similar.
I now think that doing this has been more harmful than useful, for two reasons: there's no strong evidence that it gives us much insight into our preferred ethical theories, and more importantly, thinking in those terms easily leads to akrasia.
1: Little expected insight
This seems like a relatively straightforward inference from all the discussion we've had about complexity of value and the limits of introspection, so I'll be brief. I think that attempting to come up with a verbal formalization of our underlying logic and then doing what that formalization dictates is akin to "playing baseball with verbal probabilities". Any introspective access we have into our minds is very limited, and at best, we can achieve an accurate characterization of the ethics endorsed by the most verbal/linguistic parts of our minds. (At least at the moment, future progress in moral psychology or neuroscience may eventually change this.) Because our morals are also derived from parts of our brains to which we don't have such access, our theories will unavoidably be incomplete. We are also prone to excessive rationalization when it comes to thinking about morality: see Joshua Greene and others for evidence suggesting that much of our verbal reasoning is actually just post-hoc rationalizations for underlying moral intuitions.
One could try to make the argument from Dutch Books and consistency, and argue that if we don't explicitly formulate our ethics and work out possible contradictions, we may end up doing things that work at cross-purposes. E.g. maybe my morality says that X is good, but I don't realize this and therefore end up doing things that go against X. This is probably true to some extent, but I think that evaluating the effectiveness of various instrumental approaches (e.g. the kind of work that GiveWell is doing) is much more valuable for people who have at least a rough idea of what they want, and that the kinds of details that formal ethics focuses on (including many of the discussions on this site, such as this post of mine) are akin to trying to calculate something to the 6th digit of precision when our instruments only measure things to 3 digits of precision.
To summarize this point, I've increasingly come to think that living one's life according to the judgments of any formal ethical system gets it backwards - any such system is just a crude attempt at formalizing our various intuitions and desires, and is mostly useless in determining what we should actually do. To the extent that the things I do resemble the recommendations of utilitarianism (say), it's because my natural desires happen to align with utilitarianism's recommended courses of action; and if I say that I lean towards utilitarianism, it just means that utilitarianism produces the fewest recommendations that conflict with what I would want to do anyway.
2: Leads to akrasia
Trying to follow formal theories can be actively harmful to pretty much any of the goals we have, because the theories and formalizations that the verbal parts of our minds find intellectually compelling are different from the ones that actually motivate us to action.
For example, Carl Shulman comments on why one shouldn't try to follow utilitarianism to the letter:
As those who know me can attest, I often make the point that radical self-sacrificing utilitarianism isn't found in humans and isn't a good target to aim for. Almost no one would actually take on serious harm with certainty for a small chance of helping distant others. Robin Hanson often presents evidence for this, e.g. this presentation on "why doesn't anyone create investment funds for future people?" However, sometimes people caught up in thoughts of the good they can do, or a self-image of making a big difference in the world, are motivated to think of themselves as really being motivated primarily by helping others as such. Sometimes they go on to an excessive smart sincere syndrome, and try (at the conscious/explicit level) to favor altruism at the severe expense of their other motivations: self-concern, relationships, warm fuzzy feelings.
Usually this doesn't work out well, as the explicit reasoning about principles and ideals is gradually overridden by other mental processes, leading to exhaustion, burnout, or disillusionment. The situation winds up worse according to all of the person's motivations, even altruism. Burnout means less good gets done than would have been achieved by leading a more balanced life that paid due respect to all one's values. Even more self-defeatingly, if one actually does make severe sacrifices, it will tend to repel bystanders.
Even if one avoids that particular failure mode, there remains the more general problem that very few people find it easy to be motivated by things like "what does this abstract ethical theory say I should do next". Rather, they are motivated by e.g. a sense of empathy and a desire to prevent others from suffering. But if we focus too much on constructing elaborate ethical theories, it becomes much too easy to start thinking excessively in terms of "what would this theory say I should do" and forget entirely about the original motivation that led us to formulate the theory. Then, because an abstract theory isn't intrinsically compelling in the same way that an empathic concern over suffering is, we end up with a feeling of obligation that we should do something (e.g. some concrete action that would reduce the suffering of others), but not an actual intrinsic desire to do it. This leads to the kinds of action that optimize toward the goal of making the feeling of obligation go away, rather than toward the actual goal. It can manifest via things such as excessive procrastination. (See also this discussion of how "have-to" goals require willpower to accomplish, whereas "want-to" goals are done effortlessly.)
The following is an excerpt from Trying Not To Try by Edward Slingerland that makes the same point, discussing the example of an ancient king who thought himself selfish because he didn't care about his subjects, but who did care about his family, and who spared the life of an ox because he couldn't bear to see its distress as it was about to be slaughtered:
Mencius also suggests trying to expand the circle of concern by beginning with familial feelings. Focus on the respect you have for the elders in your family, he tells the king, and the desire you have to protect and care for your children. Strengthen these feelings by both reflecting on them and putting them into practice. Compassion starts at home. Then, once you’re good at this, try expanding this feeling to the old and young people in other families. We have to imagine the king is meant to start with the families of his closest peers, who are presumably easier to empathize with, and then work his way out to more and more distant people, until he finally finds himself able to respect and care for the commoners. “One who is able to extend his kindness in this way will be able to care for everyone in the world,” Mencius concludes, “while one who cannot will find himself unable to care for even his own wife and children. That in which the ancients greatly surpassed others was none other than this: they were good at extending their behavior, that is all.”
Mencian wu-wei cultivation is about feeling and imagination, not abstract reason or rational arguments, and he gets a lot of support on this from contemporary science. The fact that imaginative extension is more effective than abstract reasoning when it comes to changing people’s behavior is a direct consequence of the action-based nature of our embodied mind. There is a growing consensus, for instance, that human thought is grounded in, and structured by, our sensorimotor experience of the world. In other words, we think in images. This is not to say that we necessarily think in pictures. An “image” in this sense could be the feeling of what it’s like to lift a heavy object or to slog in a pair of boots through some thick mud. [...]
Here again, Mencius seems prescient. The Mohists, like their modern utilitarian cousins, think that good behavior is the result of digital thinking. Your disembodied mind reduces the goods in the world to numerical values, does the math, and then imposes the results onto the body, which itself contributes nothing to the process. Mencius, on the contrary, is arguing that changing your behavior is an analog process: education needs to be holistic, drawing upon your embodied experience, your emotions and perceptions, and employing imagistic reflection and extension as its main tools. Simply telling King Xuan of Qi that he ought to feel compassion for the common people doesn’t get you very far. It would be similarly ineffective to ask him to reason abstractly about the illogical nature of caring for an ox while neglecting real live humans who are suffering as a result of his misrule. The only way to change his behavior—to nudge his wu-wei tendencies in the right direction—is to lead him through some guided exercises. We are analog beings living in an analog world. We think in images, which means that both learning and teaching depend fundamentally on the power of our imagination.
In his popular work on cultivating happiness, Jonathan Haidt draws on the metaphor of a rider (the conscious mind) trying to work together with and tame an elephant (the embodied unconscious). The problem with purely rational models of moral education, he notes, is that they try to "take the rider off the elephant and train him to solve problems on his own," through classroom instruction and abstract principles. They take the digital route, and the results are predictable: "The class ends, the rider gets back on the elephant, and nothing changes at recess." True moral education needs to be analog. Haidt brings this point home by noting that, as a philosophy major in college, he was rationally convinced by Peter Singer's arguments for the moral superiority of vegetarianism. This cold conviction, however, had no impact on his actual behavior. What convinced Haidt to become a vegetarian (at least temporarily) was seeing a video of a slaughterhouse in action--his wu-wei tendencies could be shifted only by a powerful image, not by an irrefutable argument.
My personal experience of late has also been that thinking in terms of "what does utilitarianism dictate I should do" produces recommendations that feel like external obligations, "shoulds" that are unlikely to get done; whereas thinking about e.g. the feelings of empathy that motivated me to become utilitarian in the first place produces motivations that feel like internal "wants". I was very close to (yet another) burnout and serious depression some weeks back: a large part of what allowed me to avoid it was that I stopped asking what I should do entirely, and began to focus on what I want to do, including the question of which of my currently existing wants I'd wish to cultivate further. (Of course there are some things, like doing my tax returns, that I do have to do despite not wanting to, but that's a question of necessity, not ethics.) It's too early to say whether this actually leads to increased productivity in the long term, but it feels great for my mental health, at least for the time being.
In Magic: the Gathering and other popular card games, advanced players have developed the notion of a "win-more" card. A "win-more" card is one that works very well, but only if you're already winning. In other words, it never helps turn a loss into a win, but it is very good at turning a win into a blowout. This type of card seems strong at first, but since these games usually do not use margin-of-victory scoring in tournaments, they end up being a trap--instead of using cards that convert wins into blowouts, you want to use cards that convert losses into wins.
This concept is useful and important and you should never tell a new player about it, because it tends to make them worse at the game. Without a more experienced player's understanding of core concepts, it's easy to make mistakes and label cards that are actually good as being win-more.
This is an especially dangerous mistake to make because it's relatively uncommon for an outright bad card to seem like a win-more card; win-more cards are almost always cards that look really good at first. That means that if you end up being too wary of win-more cards, you're going to end up misclassifying good cards as bad, and that's an extremely dangerous mistake to make. Misclassifying bad cards as good is relatively easy to deal with, because you'll use them and see that they aren't good; misclassifying good cards as bad is much more dangerous, because you won't play them and therefore won't get the evidence you need to update your position.
I call this the "win-more problem." Concepts that suffer from the win-more problem are those that--while certainly useful to an advanced user--are misleading or net harmful to a less skillful person. Further, they are wrong or harmful in ways that are difficult to detect, because they screen off feedback loops that would otherwise allow someone to realize the mistake.
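The scoring asymmetry is easy to see in a toy simulation. All of the numbers and names below are made up for illustration; no real card game is being modeled. A card that only inflates the margin of games you were already winning leaves your tournament record untouched, while a card that rescues even a few losses moves it:

```python
import random

random.seed(0)

BASE_WIN = 0.5      # chance of winning without either card (made-up)
RESCUE = 0.1        # fraction of losses a "comeback" card converts

def win_rate(deck, n=100_000):
    wins = 0
    for _ in range(n):
        won = random.random() < BASE_WIN
        if deck == "comeback" and not won and random.random() < RESCUE:
            won = True          # a loss turned into a win
        # deck == "win_more" would only raise the *margin* of games
        # already won; tournaments score win/loss, so it changes nothing
        wins += won
    return wins / n

# win_rate("win_more") stays near 0.50; win_rate("comeback") lands near 0.55
```

The trap, as described above, is that the "win_more" deck still produces memorably lopsided victories, which is exactly the kind of feedback that misleads a player who isn't tracking the win rate itself.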
Scott, known on LessWrong as Yvain, recently wrote a post complaining about an inaccurate rape statistic.
Arthur Chu, who recently became notable for winning money on Jeopardy!, argued against Scott's stance that we should be honest in arguments, in a comment thread on Jeff Kaufman's Facebook profile, which can be read here.
Scott just responded here, with a number of points relevant to the topic of rationalist communities.
I am interested in what LW thinks of this.
Obviously, at some point scrupulous honesty in our arguments becomes silly. I'd be interested in people's opinions on how dire the real-world consequences have to be before it's worthwhile to debate dishonestly.
- First "official" program to practice suspended animation
- The article naturally goes on to ask whether longer SA (months, years) is possible
- Amazing quote: "Every day at work I declare people dead. They have no signs of life, no heartbeat, no brain activity. I sign a piece of paper knowing in my heart that they are not actually dead. I could, right then and there, suspend them. But I have to put them in a body bag. It's frustrating to know there's a solution."
- IMO, if this succeeds (I hope!), it will go a long way toward bridging the emotional gap for cryonics
So I know we've already seen them buying a bunch of ML and robotics companies, but now they're purchasing Shane Legg's AGI startup. This is after they've acquired Boston Dynamics, several smaller robotics and ML firms, and started their own life-extension firm.
Is it just me, or are they trying to make Accelerando or something closely related actually happen? Given that they're buying up real experts and not just "AI is inevitable" prediction geeks (who shall remain politely unnamed out of respect for their real, original expertise in machine learning), has someone had a polite word with them about not killing all humans by sheer accident?
This post is inspired by a recent comment thread on my Facebook. I asked people to respond with whether or not they kept fire/lock boxes in their homes for their important documents (mainly to prove to a friend that this is a Thing People Do). It was pretty evenly divided, with slightly more people having them than not. The interesting pattern I noticed was that almost ALL of my non-rationality-community friends DID have them, almost NONE of my rationality-community friends did, and some hadn't even considered it.
This could be because getting a lock box is not an optimal use of time or money, OR it could be because rationalists overlook mundane household matters more than the average person does. I'm actually not certain which, so I'm writing this post to present the case for keeping certain emergency items, in the hope that either I'll get some interesting arguments against prepping that I haven't thought of yet, OR I'll get even better ideas in the comments.
Many LWers are concerned about x-risks that have a small chance of causing massive damage. We may or may not see this occur in our lifetime. However, there are small problems that occur every 2-3 years or so (extended blackout, being snowed in, etc), and there are mid-sized catastrophes that you might see a couple times in your life (blizzards, hurricanes, etc). It is likely that at least once in your life you will be snowed in your house and the pipes will burst or freeze (or whatever the local equivalent is, if you live in a warmer climate). Having the basic preparations ready for these occurrences is low cost (many minor emergencies require a similar set of preparations), and high payoff.
Medicine and Hospitality
This category is so minor that you probably don't think of it as "emergency" preparation, but it's still A Thing To Prepare For. It really sucks having to go to the store when you're sick because you don't already have the medicine you need at hand. It's better to keep the basics always available, just in case. You, or a guest, are likely to be grateful that you have these on hand. Even if you personally never get sick, I consider a well-stocked medicine cabinet a point of hospitality. If you have people over to your place with any frequency, it is nice to have:
- Pain reliever (e.g. ibuprofen or another NSAID)
- Zyrtec (Especially if you have cats. Guests might be allergic!)
- Antacids, Chewable Pepto, Gas-X (Especially if you have people over for food)
- Multipurpose contact solution (getting something in your contact without any solution nearby is both rare and awful)
- Neosporin/bandaids (esp. if your cats scratch :P)
- Spare toothbrush (esp. if you might have a multi-day guest)
- Single-use disposable toothbrushes (such as Wisp). These are also good to carry with you in your backpack or purse.
- Pads/tampons (Yes, even if you're a guy. They should be somewhere obvious such as under the sink, so that your guest doesn't need to ask)
In the Car
- Protein/granola bars
- Jumper Cables
- Spare Tire and jack
- If you get frequent headaches or the like, you might also want to keep your preferred pain reliever or whatnot in the car
Minor Catastrophe Preparation
These are somewhat geography dependent. Adjust for whatever catastrophes are common in your area. There are places where if you don't have 4 wheel drive, you're just not going to be able to leave your house during a snowstorm. There are places where tornadoes or earthquakes are common. There are places where a bad hurricane rolls through every couple years. If you're new to an area, make sure you know what the local "regular" emergency is.
Some of these are a bit of a harder sell, I think.
- Flashlights (that you can find in the dark)
- Spare batteries
- Water (ready.gov recommends one gallon per person per day, with enough for 3 days)
- Non perishable food (ideally that doesn't need to be cooked, e.g. canned goods)
- Manual can opener
- Fire Extinguisher
- Action: check out ready.gov for the emergencies that are most common for your area, and read their recommendations
- A "Go Bag" (something pre-packed that you can grab and go)
- A fire-safe lock box (not only does this protect your documents, it also gives them one obvious home, instead of "somewhere in that file drawer... or somewhere else")
- Back up your data in the cloud
- Moar water, moar food
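The ready.gov water rule above turns into a one-line sizing calculation. This is just a sketch; the function name and the example household sizes are my own, and only the one-gallon-per-person-per-day figure comes from the list:

```python
# Supply-sizing sketch from the ready.gov rule quoted above:
# one gallon of water per person per day, stocked for three days.

def water_gallons(people, days=3, gallons_per_person_per_day=1):
    return people * days * gallons_per_person_per_day

# A two-person household needs 6 gallons for the 3-day minimum;
# stocking a full week for a family of four takes 28.
```

Doing the multiplication once is worth it: the result is usually more water than people expect to have on hand.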
I'm not sure I've ever seen such a compelling "rationality success story". There's so much that's right here.
The part that really grabs me about this is that there's no indication that his success has depended on "natural" skill or talent. And none of the strategies he's using are from novel research. He just studied the "literature" and took the results seriously. He didn't arbitrarily deviate from known best practice based on aesthetics or intuition. And he kept a simple, single-minded focus on his goal. No lost purposes here--just win as much money as possible, bank the winnings, and use it to self-insure. It's rationality-as-winning, plain and simple.
[Summary: Trying to use new ideas is more productive than trying to evaluate them.]
I haven't posted to LessWrong in a long time. I have a fan-fiction blog where I post theories about writing and literature. Topics don't overlap at all between the two websites (so far), but I prioritize posting there much higher than posting here, because responses seem more productive there.
The key difference, I think, is that people who read posts on LessWrong ask whether they're "true" or "false", while the writers who read my posts on writing want to write. If I say something that doesn't ring true to one of them, he's likely to say, "I don't think that's quite right; try changing X to Y," or, "When I'm in that situation, I find Z more helpful", or, "That doesn't cover all the cases, but if we expand your idea in this way..."
Whereas on LessWrong a more typical response would be, "Aha, I've found a case for which your step 7 fails! GOTCHA!"
It's always clear from the context of a writing blog why a piece of information might be useful. It often isn't clear how a LessWrong post might be useful. You could blame the author for not providing you with that context. Or, you could be pro-active and provide that context yourself, by thinking as you read a post about how it fits into the bigger framework of questions about rationality, utility, philosophy, ethics, and the future, and thinking about what questions and goals you have that it might be relevant to.
For the reasons mentioned in So8res' article, as well as others: studying with a partner can be very good. In November, Adele_L posted an article for people wanting to find a study partner. It got 17 comments, but only 1 since November 16th. So I thought we (I) should make a monthly thread instead of constantly going back to an old article that people seem to forget about. If people agree, I will make a post like this every month.
So if you're looking for a study partner for an online course or for reading through a manual (whether it's on the MIRI course list or not), tell others in the comment section.
I've had a manageable-but-important Problem for a few months now (financial in nature; the details are neither relevant nor interesting). It's moderately complex, and of relatively minor importance unless I leave it unsolved just a little longer.
Unfortunately, this seems to be the precise combination of things that triggers one of my ugh fields, which manifests subjectively as a fuzzy blank inability to maintain focus. Several times last week, it occurred to me that I should really Solve The Problem, but I wasn't able to get myself to spend any time thinking about it. Like, at all.
On Saturday, the Problem found itself top of mind once again. How irritating that I couldn't solve the Problem because it was the weekend, and when it wasn't the weekend, maybe Tuesday when work wasn't busy and the Bureau was open, I should really email Dr. Somebody and call Mrs. Administrator for the ...
I had a solution, and a plan. What the what?
My working theory is that when there's no chance of actually Doing Something, this particular ugh field deactivates.
To me, this suggests a strategy (of uncertain generalizability): when an ugh field is preventing thought about something important, find a time when action is impossible and use it to generate a plan.
I would feel better about this advice if it had deeper theoretical backing. Anybody?
In his recent excellent blog post, Yvain discusses a few "universal" (commonplace) human experiences that many people never notice they don't have, such as the ability to smell, see some colors, see mental pictures, and feel emotions. I was reminded of a longstanding argument I had with a friend. She always insisted that she would rather be blind than deaf. I could not understand how that was possible, since the visual world is so much richer and more interesting. We later found out that I can see an order of magnitude more colors than she can, but have subpar ability to distinguish tones. And I thought she was just being a contrarian for its own sake. I thought the experience of that many colors was universal, and had rarely seen evidence that challenged that belief.
More seriously, a good friend of mine spent the first three decades of his life not realizing that he suffered from a serious genetic disorder that caused him extreme body pain and terrible headaches whenever he became tired or dehydrated. He thought everyone felt that way, but considered it whiny to talk about it. He almost never mentioned it, and never realized what it was, until <bragging> I noticed how tense his expressions became when he got tired, asked him about it, and put it together with some other unusual physical experiences I knew he had </bragging>.
This got me thinking about when it is likely we might be having unusual sensory experiences and not realize for long periods of time. I am calling these "secretly secret experiences." Here are the factors that might increase the likelihood of having a secretly secret experience.
1) When they are rarely consciously examined: experiences such as the ability to distinguish subtle differences in shades of color are tested occasionally (when choosing paint or ripe fruit), but few people besides interior decorators think about how good their shade-distinguishing skills are. Other examples include the feeling of being in different moods or mental states, breathing, and sensing commonly-sensed things (the look of roads, the sound of voices, etc.). Most of the examples from the blog post fall into this category. People might not notice that they over-, under-, or differently experience such feelings relative to others.
2) When they are rarely discussed in everyday life: If my experience of pooping feels very different from other peoples' I may never know, because I don't discuss the experience in detail with anyone. If people talked about their experiences, I would probably notice if mine didn't match up, but that's unlikely to happen. The same might apply for other experiences that are taboo to discuss, such as masturbation, sex (in some cultures), anything considered gross or unhygienic, or socially awkward experiences (in some cultures).
3) When there is social pressure to experience something a certain way: it may be socially dangerous to admit you don't find members of the opposite sex attractive, or you didn't enjoy The Godfather or whatever. Depending on your sensitivity to social pressure (see 4) and the strength of the pressure, this could lead to unawareness about true rare preferences.
4) Sensitivity to external influences: Some people pick up on social cues more easily than others. Some notice social norms more readily, and some seem more or less willing to violate some norms (partly because of how well they perceive them, plus some other factors). I can imagine that a deeply autistic person might be influenced far less by mainstream descriptions of different experiences. Exceptionally socially attuned people might (perhaps) take social influences to heart and be less able to distinguish their own experiences from those they know about.
5) When skills are redundant or you have good substitutes: For example, if we live in a world with only fish and mammals, and all mammals are brown and warm and all fish are cold and silver, you might never notice that you can't feel temperature because you are still a perfectly good mammal and fish distinguisher. In the real world, it's harder to find clear examples, but I can think of substitutes for color-sightedness such as shade and textural cues that increase the likelihood of a color-blind person not realizing zir blindness. Similarly, empathy and social adeptness may increase someone's ability both to mask that ze is having a different experience than others, and the likelihood that ze will believe all others are good at hiding a different experience than the one they portray openly.
What else can people think of?
Special thanks to JT for his feedback and for letting me share his story.
In late December 2013, Jonah, my collaborator at Cognito Mentoring, announced the service on LessWrong. Information about the service was also circulated in other venues with high concentrations of gifted and intellectually curious people. Since then, we've received ~70 emails asking for mentoring from learners across all ages, plus a few parents. At least 40 of our advisees heard of us through LessWrong, and the number is probably around 50. Of the 23 who responded to our advisee satisfaction survey, 16 filled in information on where they'd heard of us, and 14 of those 16 had heard of us from LessWrong. The vast majority of student advisees with whom we had substantive interactions, and the ones we felt we were able to help the most, came from LessWrong (we got some parents through the Davidson Forum post, but that's a very different sort of advising).
In this post, I discuss some common themes that emerged from our interaction with these advisees. Obviously, this isn't a comprehensive picture of the LessWrong community the way that Yvain's 2013 survey results were.
- A significant fraction of the people who contacted us via LessWrong aren't active LessWrong participants, and many don't even have user accounts on LessWrong. The prototypical advisees we got through LessWrong don't have many distinctive LessWrongian beliefs. Many of them use LessWrong primarily as a source of interesting stuff to read, rather than a community to be part of.
- About 25% of the advisees we got through LessWrong were female, and a slightly higher proportion of the advisees with whom we had substantive interaction (and subjectively feel we helped a lot) were female. You can see this by looking at the sex distribution of the public reviews of us from students.
- Our advisees included people in high school (typically, grades 11 and 12) and college. Our advisees in high school tended to be interested in mathematics, computer science, physics, engineering, and entrepreneurship. We did have a few who were interested in economics, philosophy, and the social sciences as well, but this was rarer. Our advisees in college and graduate school were also interested in the above subjects but skewed a bit more in the direction of being interested in philosophy, psychology, and economics.
- Somewhat surprisingly and endearingly, many of our advisees were interested in effective altruism and social impact. Some had already heard of the cluster of effective altruist ideas. Others were interested in generating social impact through entrepreneurship or choosing an impactful career, even though they weren't familiar with effective altruism until we pointed them to it. Of those who had heard of effective altruism as a cluster of ideas, some had either already consulted with or were planning to consult with 80,000 Hours, and were connecting with us largely to get a second opinion or to get input on matters other than career choice.
- Some of our advisees had had some sort of past involvement with MIRI/CFAR/FHI. Some were seriously considering working in existential risk reduction or on artificial intelligence. The two subsets overlapped considerably.
- Our advisees were somewhat better educated about rationality issues than we'd expect others of similar academic accomplishment to be, and more than the advisees we got from sources other than LessWrong. That's obviously not a surprise at all.
- We hadn't been expecting it, but many advisees asked us questions related to procrastination, social skills, and other life skills. We were initially somewhat ill-equipped to handle these, but we've built a base of recommendations, with some help from LessWrong and other sources.
- One thing that surprised me personally is that many of these people had never spent time exploring Quora. I'd have expected Quora to be much more widely known and used by the sort of people who were sufficiently aware of the Internet to know LessWrong. But it's possible there's not that much overlap.
My overall takeaway is that LessWrong seems to still be one of the foremost places that smart and curious young people interested in epistemic rationality visit. I'm not sure of the exact reason, though HPMOR probably gets a significant fraction of the credit. As long as things stay this way, LessWrong remains a great way to influence a subset of the young population today that's likely to be disproportionately represented among the decision-makers a few years down the line.
It's not clear to me why they don't participate more actively on LessWrong. Maybe no special reasons are needed: the ratio of lurkers to posters is huge for most Internet fora. Maybe the people who contacted us were relatively young and still didn't have an Internet presence, or were being careful about building one. On the other hand, maybe there is something about the comments culture that dissuades people from participating (this need not be a bad feature per se: one reason people may refrain from participating is that comments are held to a high bar and this keeps people from offering off-the-cuff comments). That said, if people could somehow participate more, LessWrong could transform itself into an interactive forum for smart and curious people that's head and shoulders above all the others.
PS: We've now made our information wiki publicly accessible. It's still in beta and a lot of content is incomplete and there are links to as-yet-uncreated pages all over the place. But we think it might still be interesting to the LessWrong audience.
When I was a freshman in high school, I was a mediocre math student: I earned a D in second semester geometry and had to repeat the course. By the time I was a senior in high school, I was one of the strongest few math students in my class of ~600 students at an academic magnet high school. I went on to earn a PhD in math. Most people wouldn't have guessed that I could have improved so much, and the shift that occurred was very surreal to me. It’s all the more striking in that the bulk of the shift occurred in a single year. I thought I’d share what strategies facilitated the change.
I've recently had the bad luck of having numerous people close to me die. Though I've wanted to contribute to anti-aging and anti-death research for a while, I'm only now in the position of being stable and materially well-off enough to throw around semi-serious cash.
Who should I donate to? I don't want to do anything with cryonics yet; I haven't given cryonics enough thought to be convinced it'd be worth the money. But I was considering the Methuselah foundation.
Reply to: Benja2010's Self-modification is the correct justification for updateless decision theory; Wei Dai's Late great filter is not bad news
"P-zombie" is short for "philosophical zombie", but here I'm going to re-interpret it as standing for "physical philosophical zombie", and contrast it to what I call an "l-zombie", for "logical philosophical zombie".
A p-zombie is an ordinary human body with an ordinary human brain that does all the usual things that human brains do, such as the things that cause us to move our mouths and say "I think, therefore I am", but that isn't conscious. (The usual consensus on LW is that p-zombies can't exist, but some philosophers disagree.) The notion of p-zombie accepts that human behavior is produced by physical, computable processes, but imagines that these physical processes don't produce conscious experience without some additional epiphenomenal factor.
An l-zombie is a human being that could have existed, but doesn't: a Turing machine which, if anybody ever ran it, would compute that human's thought processes (and its interactions with a simulated environment); that would, if anybody ever ran it, compute the human saying "I think, therefore I am"; but that never gets run, and therefore isn't conscious. (If it's conscious anyway, it's not an l-zombie by this definition.) The notion of l-zombie accepts that human behavior is produced by computable processes, but supposes that these computational processes don't produce conscious experience without being physically instantiated.
Actually, there probably aren't any l-zombies: The way the evidence is pointing, it seems like we probably live in a spatially infinite universe where every physically possible human brain is instantiated somewhere, although some are instantiated less frequently than others; and if that's not true, there are the "bubble universes" arising from cosmological inflation, the branches of many-worlds quantum mechanics, and Tegmark's "level IV" multiverse of all mathematical structures, all suggesting again that all possible human brains are in fact instantiated. But (a) I don't think that even with all that evidence, we can be overwhelmingly certain that all brains are instantiated; and, more importantly actually, (b) I think that thinking about l-zombies can yield some useful insights into how to think about worlds where all humans exist, but some of them have more measure ("magical reality fluid") than others.
So I ask: Suppose that we do indeed live in a world with l-zombies, where only some of all mathematically possible humans exist physically, and only those that do have conscious experiences. How should someone living in such a world reason about their experiences, and how should they make decisions — keeping in mind that if they were an l-zombie, they would still say "I have conscious experiences, so clearly I can't be an l-zombie"?
If we can't update on our experiences to conclude that someone having these experiences must exist in the physical world, then we must of course conclude that we are almost certainly l-zombies: after all, if the physical universe isn't combinatorially large, the vast majority of mathematically possible conscious human experiences are not instantiated. You might argue that the universe you live in seems to run on relatively simple physical rules, so it should have high prior probability. But we haven't really figured out the exact rules of our universe; what we understand seems compatible with the hypothesis that there are simple underlying rules, but that's not proof that there are such rules, if "the real universe has simple rules, but we are l-zombies living in some random simulation with a hodgepodge of rules (one that isn't actually run)" has the same prior probability. Worse, if you don't have everything we do know about these rules loaded into your brain right now, you can't really verify that they make sense, since there is some mathematically possible simulation whose initial state has you remembering evidence that such simple rules exist, even if they don't. And worse still, even if there are such simple rules, what evidence do you have that if these rules were actually executed, they would produce you? Only the fact that you, like, exist; but we're asking what happens if we don't let you update on that.
I find myself quite unwilling to accept this conclusion that I shouldn't update, in the world we're talking about. I mean, I actually have conscious experiences. I, like, feel them and stuff! Yes, true, my slightly altered alter ego would reason the same way, and it would be wrong; but I'm right...
...and that actually seems to offer a way out of the conundrum: Suppose that I decide to update on my experience. Then so will my alter ego, the l-zombie. This leads to a lot of l-zombies concluding "I think, therefore I am", and being wrong, and a lot of actual people concluding "I think, therefore I am", and being right. All the thoughts that are actually consciously experienced are, in fact, correct. This doesn't seem like such a terrible outcome. Therefore, I'm willing to provisionally endorse the reasoning "I think, therefore I am", and to endorse updating on the fact that I have conscious experiences to draw inferences about physical reality — taking into account the simulation argument, of course, and conditioning on living in a small universe, which is all I'm discussing in this post.
NB. There's still something quite uncomfortable about the idea that all of my behavior, including the fact that I say "I think therefore I am", is explained by the mathematical process, but actually being conscious requires some extra magical reality fluid. So I still feel confused, and using the word l-zombie in analogy to p-zombie is a way of highlighting that. But this line of reasoning still feels like progress. FWIW.
But if that's how we justify believing that we physically exist, that has some implications for how we should decide what to do. The argument is that nothing very bad happens if the l-zombies wrongly conclude that they actually exist. Mostly, that also seems to be true if they act on that belief: mostly, what l-zombies do doesn't seem to influence what happens in the real world, so if only things that actually happen are morally important, it doesn't seem to matter what the l-zombies decide to do. But there are exceptions.
Consider the counterfactual mugging: Accurate and trustworthy Omega appears to you and explains that it just has thrown a very biased coin that had only a 1/1000 chance of landing heads. As it turns out, this coin has in fact landed heads, and now Omega is offering you a choice: It can either (A) create a Friendly AI or (B) destroy humanity. Which would you like? There is a catch, though: Before it threw the coin, Omega made a prediction about what you would do if the coin fell heads (and it was able to make a confident prediction about what you would choose). If the coin had fallen tails, it would have created an FAI if it had predicted that you'd choose (B), and it would have destroyed humanity if it had predicted that you would choose (A). (If it hadn't been able to make a confident prediction about what you would choose, it would just have destroyed humanity outright.)
There is a clear argument that, if you expect to find yourself in a situation like this in the future, you would want to self-modify into somebody who would choose (B), since this gives humanity a much larger chance of survival. Thus, a decision theory stable under self-modification would answer (B). But if you update on the fact that you consciously experience Omega telling you that the coin landed heads, (A) would seem to be the better choice!
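For concreteness, here's a minimal sketch (my own construction, not from the original post) of the ex-ante expected-utility comparison behind that argument, assuming a utility of 1 for a Friendly AI future and 0 for humanity's destruction:

```python
# Ex-ante expected utility of the two policies in the counterfactual mugging.
# Assumed utilities (my stipulation): FAI future = 1, destruction = 0.
P_HEADS = 1 / 1000  # the coin's chance of landing heads

def expected_utility(policy):
    """policy: what you'd choose if told the coin landed heads ('A' or 'B')."""
    if policy == "A":
        # Heads: you pick FAI. Tails: Omega predicted (A), so it destroys humanity.
        return P_HEADS * 1 + (1 - P_HEADS) * 0
    else:
        # Heads: you pick destruction. Tails: Omega predicted (B), so it creates FAI.
        return P_HEADS * 0 + (1 - P_HEADS) * 1

print(expected_utility("A"))  # 0.001
print(expected_utility("B"))  # 0.999
```

Before the coin is thrown, committing to (B) is worth 999 times as much as committing to (A); the tension is that after hearing "heads", updating makes (A) look better.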
One way of looking at this is that if the coin falls tails, the l-zombie that is told the coin landed heads still exists mathematically, and this l-zombie now has the power to influence what happens in the real world. If the argument for updating was that nothing bad happens even though the l-zombies get it wrong, well, that argument breaks here. The mathematical process that is your mind doesn't have any evidence about whether the coin landed heads or tails, because as a mathematical object it exists in both possible worlds, and it has to make a decision in both worlds, and that decision affects humanity's future in both worlds.
Back in 2010, I wrote a post arguing that yes, you would want to self-modify into something that would choose (B), but that that was the only reason why you'd want to choose (B). Here's a variation on the above scenario that illustrates the point I was trying to make back then: Suppose that Omega tells you that it actually threw its coin a million years ago, and if it had fallen tails, it would have turned Alpha Centauri purple. Now throughout your history, the argument goes, you would never have had any motive to self-modify into something that chooses (B) in this particular scenario, because you've always known that Alpha Centauri isn't, in fact, purple.
But this argument assumes that you know you're not an l-zombie; if the coin had in fact fallen tails, you wouldn't exist as a conscious being, but you'd still exist as a mathematical decision-making process, and that process would be able to influence the real world, so you-the-decision-process can't reason that "I think, therefore I am, therefore the coin must have fallen heads, therefore I should choose (A)." Partly because of this, I now accept choosing (B) as the (most likely to be) correct choice even in that case. (The rest of my change in opinion has to do with all ways of making my earlier intuition formal getting into trouble in decision problems where you can influence whether you're brought into existence, but that's a topic for another post.)
However, should you feel cheerful while you're announcing your choice of (B), since with high (prior) probability, you've just saved humanity? That would lead to an actual conscious being feeling cheerful if the coin has landed heads and humanity is going to be destroyed, and an l-zombie computing, but not actually experiencing, cheerfulness if the coin has landed tails and humanity is going to be saved. Nothing good comes out of feeling cheerful, not even alignment of a conscious being's map with the physical territory. So I think the correct thing is to choose (B), and to be deeply sad about it.
You may be asking why I should care what the right probabilities to assign or the right feelings to have are, since these don't seem to play any role in making decisions; sometimes you make your decisions as if updating on your conscious experience, but sometimes you don't, and you always get the right answer if you don't update in the first place. Indeed, I expect that the "correct" design for an AI is to fundamentally use (more precisely: approximate) updateless decision theory (though I also expect that probabilities updated on the AI's sensory input will be useful for many intermediate computations), and "I compute, therefore I am"-style reasoning will play no fundamental role in the AI. And I think the same is true for humans' decisions — the correct way to act is given by updateless reasoning. But as a human, I find myself unsatisfied by not being able to have a picture of what the physical world probably looks like. I may not need one to figure out how I should act; I still want one, not for instrumental reasons, but because I want one. In a small universe where most mathematically possible humans are l-zombies, the argument in this post seems to give me a justification to say "I think, therefore I am, therefore probably I either live in a simulation or what I've learned about the laws of physics describes how the real world works (even though there are many l-zombies who are thinking similar thoughts but are wrong about them)."
And because of this, even though I disagree with my 2010 post, I also still disagree with Wei Dai's 2010 post arguing that a late Great Filter is good news, which my own 2010 post was trying to argue against. Wei argued that if Omega gave you a choice between (A) destroying the world now and (B) having Omega destroy the world a million years ago (so that you are never instantiated as a conscious being, though your choice as an l-zombie still influences the real world), then you would choose (A), to give humanity at least the time it's had so far. Wei concluded that this means that if you learned that the Great Filter is in our future, rather than our past, that must be good news, since if you could choose where to place the filter, you should place it in the future. I now agree with Wei that (A) is the right choice, but I don't think that you should be happy about it. And similarly, I don't think you should be happy about news that tells you that the Great Filter is later than you might have expected.
In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.
-- Lord Kelvin
If you believe that science is about describing things mathematically, you can fall into a strange sort of trap where you come up with some numerical quantity, discover interesting facts about it, use it to analyze real-world situations - but never actually get around to measuring it. I call such things "theoretical quantities" or "fake numbers", as opposed to "measurable quantities" or "true numbers".
An example of a "true number" is mass. We can measure the mass of a person or a car, and we use these values in engineering all the time. An example of a "fake number" is utility. I've never seen a concrete utility value used anywhere, though I always hear about nice mathematical laws that it must obey.
The difference is not just about units of measurement. In economics you can see fake numbers happily coexisting with true numbers using the same units. Price is a true number measured in dollars, and you see concrete values and graphs everywhere. "Consumer surplus" is also measured in dollars, but good luck calculating the consumer surplus of a single cheeseburger, never mind drawing a graph of aggregate consumer surplus for the US! If you ask five economists to calculate it, you'll get five different indirect estimates, and it's not obvious that there's a true number to be measured in the first place.
Another example of a fake number is "complexity" or "maintainability" in software engineering. Sure, people have proposed different methods of measuring it. But if they were measuring a true number, I'd expect them to agree to the 3rd decimal place, which they don't :-) The existence of multiple measuring methods that give the same result is one of the differences between a true number and a fake one. Another sign is what happens when two of these methods disagree: do people say that they're both equally valid, or do they insist that one must be wrong and try to find the error?
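As a toy illustration (my own construction, not an established metric), two crude "complexity" measures can rank the same pair of snippets in opposite orders, which is exactly the warning sign described above:

```python
# Two crude, hypothetical "complexity" metrics applied to two code snippets.
# If "complexity" were a true number, different measuring methods should agree;
# here they disagree even about which snippet is more complex.

snippet_a = "if x:\n    y = 1\nelse:\n    y = 2\nif y:\n    z = 3"
snippet_b = ("total = sum(value * weight for value, weight in pairs "
             "if weight > 0) / max(len(pairs), 1)")

def metric_length(code):
    """'Complexity' as sheer size: number of whitespace-separated tokens."""
    return len(code.split())

def metric_branches(code):
    """'Complexity' as number of branching/looping keywords."""
    return sum(code.split().count(kw) for kw in ("if", "else:", "while", "for"))

# metric_length says snippet_b is more complex; metric_branches says snippet_a is.
print(metric_length(snippet_a), metric_length(snippet_b))
print(metric_branches(snippet_a), metric_branches(snippet_b))
```

When two measurement procedures for the "same" quantity don't even agree on ordering, that's evidence there is no single underlying true number being measured.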
It's certainly possible to improve something without measuring it. You can learn to play the piano pretty well without quantifying your progress. But we should probably try harder to find measurable components of "intelligence", "rationality", "productivity" and other such things, because we'd be better at improving them if we had true numbers in our hands.
I'm struggling to understand anything technical on this website. I've enjoyed reading the sequences, and they have given me a lot to think about. Still, I've read the introduction to Bayes theorem multiple times, and I simply can't grasp it. Even starting at the very beginning of the sequences I quickly get lost because there are references to programming and cognitive science which I simply do not understand.
Thinking about it, I realized that this might be a common concern. There are probably plenty of people who've looked at various more-or-less technical or jargony Less Wrong posts, tried understanding them, and then given up (without posting a comment explaining their confusion).
So I figured that it might be good to have a thread where you can ask for explanations for any Less Wrong post that you didn't understand and would like to, but don't want to directly comment on for any reason (e.g. because you're feeling embarrassed, because the post is too old to attract much traffic, etc.). In the spirit of various Stupid Questions threads, you're explicitly encouraged to ask even for the kinds of explanations that you feel you "should" be able to get by yourself, or where you feel like you could get it if you just put in the effort (but then never did).
You can ask to have some specific confusing term or analogy explained, or to get the main content of a post briefly summarized in plain English and without jargon, or anything else. (Of course, there are some posts that simply cannot be explained in non-technical terms, such as the ones in the Quantum Mechanics sequence.) And of course, you're encouraged to provide explanations to others!
Ever since Tversky and Kahneman started to gather evidence purporting to show that humans suffer from a large number of cognitive biases, other psychologists and philosophers have criticized these findings. For instance, philosopher L. J. Cohen argued in the 80's that there was something conceptually incoherent with the notion that most adults are irrational (with respect to a certain problem). By some sort of Wittgensteinian logic, he thought that the majority's way of reasoning is by definition right. (Not a high point in the history of analytic philosophy, in my view.) See chapter 8 of this book (where Gigerenzer, below, is also discussed).
Another attempt to resurrect human rationality is due to Gerd Gigerenzer and other psychologists. They have a) shown that if you tweak some of the heuristics and biases (i.e. the research program led by Tversky and Kahneman) experiments just a little - for instance by expressing probabilities in terms of frequencies - people make far fewer mistakes and b) argued, on the back of this, that the heuristics we use are in many situations good (and fast and frugal) rules of thumb (which explains why they are evolutionarily adaptive). Regarding this, I don't think that Tversky and Kahneman ever doubted that the heuristics we use are quite useful in many situations. Their point was rather that there are lots of naturally occurring set-ups which fool our fast and frugal heuristics. Gigerenzer's findings are not completely uninteresting - it seems to me he does nuance the thesis of massive irrationality a bit - but his claims to the effect that these heuristics are rational in a strong sense are wildly overblown in my opinion. The Gigerenzer vs. Tversky/Kahneman debates are well discussed in this article (although I think they're too kind to Gigerenzer).
A strong argument against attempts to save human rationality is the argument from individual differences, championed by Keith Stanovich. He argues that the fact that some intelligent subjects consistently avoid falling prey to the Wason selection task, the conjunction fallacy, and other fallacies indicates that it is misguided to claim that the answers psychologists have traditionally seen as normatively correct are in fact mistaken.
Hence I side with Tversky and Kahneman in this debate. Let me just mention one interesting and possibly successful method for disputing some supposed biases. This method is to argue that people have other kinds of evidence than the standard interpretation assumes, and that given this new interpretation of the evidence, the supposed bias in question is in fact not a bias. For instance, it has been suggested that the "false consensus effect" can be re-interpreted in this way:
The False Consensus Effect
Bias description: People tend to imagine that everyone responds the way they do. They tend to see their own behavior as typical. The tendency to exaggerate how common one’s opinions and behavior are is called the false consensus effect. For example, in one study, subjects were asked to walk around on campus for 30 minutes, wearing a sign board that said "Repent!". Those who agreed to wear the sign estimated that on average 63.5% of their fellow students would also agree, while those who disagreed estimated 23.3% on average.
Counterclaim (Dawes & Mulford, 1996): The correctness of reasoning is not estimated on the basis of whether or not one arrives at the correct result. Instead, we look at whether people reach reasonable conclusions given the data they have. Suppose we ask people to estimate whether an urn contains more blue balls or red balls, after allowing them to draw one ball. If one person first draws a red ball, and another person draws a blue ball, then we should expect them to give different estimates. In the absence of other data, you should treat your own preferences as evidence for the preferences of others. Although the actual mean for people willing to carry a sign saying "Repent!" probably lies somewhere in between the estimates given, these estimates are quite close to the one-third and two-thirds estimates that would arise from a Bayesian analysis with a uniform prior distribution of belief. A study by the authors suggested that people do actually give their own opinion roughly the right amount of weight.
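The one-third and two-thirds figures come from Laplace's rule of succession: with a uniform prior over the proportion, the posterior mean after observing k agreements in n observations is (k+1)/(n+2). A quick sketch:

```python
from fractions import Fraction

def posterior_mean(agree, total):
    """Posterior mean of a proportion under a uniform prior, after
    observing `agree` agreements out of `total` observations
    (Laplace's rule of succession: (k+1)/(n+2))."""
    return Fraction(agree + 1, total + 2)

# Treating your own choice as a single observation:
print(posterior_mean(1, 1))  # 2/3: sign-wearers' implied estimate of agreement
print(posterior_mean(0, 1))  # 1/3: refusers' implied estimate of agreement
```

On this reading, the subjects' 63.5% and 23.3% estimates are close to what an ideal Bayesian with no other data would report, not evidence of a bias.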
(The quote is from an excellent Less Wrong article on this topic due to Kaj Sotala. See also this post by him, this by Andy McKenzie, this by Stuart Armstrong and this by lukeprog on this topic. I'm sure there are more that I've missed.)
It strikes me that the notion that people are "massively flawed" is something of an intellectual cornerstone of the Less Wrong community (e.g. note the names "Less Wrong" and "Overcoming Bias"). In the light of this it would be interesting to hear what people have to say about the rationality wars. Do you all agree that people are massively flawed?
Let me make two final notes to keep in mind when discussing these issues. Firstly, even though the heuristics and biases program is sometimes seen as pessimistic, one could turn the tables around: if they're right, we should be able to improve massively (even though Kahneman himself seems to think that that's hard to do in practice). I take it that CFAR and lots of LessWrongers who attempt to "refine their rationality" assume that this is the case. On the other hand, if Gigerenzer or Cohen are right, and we already are very rational, then it would seem that it is hard to do much better. So in a sense the latter are more pessimistic (and conservative) than the former.
Secondly, note that parts of the rationality wars seem to be merely verbal and revolve around how "rationality" is to be defined (tabooing this word is very often a good idea). The real question is not if the fast and frugal heuristics are in some sense rational, but whether there are other mental algorithms which are more reliable and effective, and whether it is plausible to assume that we could learn to use them on a large scale instead.
by Patrick Brinich-Langlois and Ozzie Gooen
Communities once kept our ancestors from being torn apart by mountain lions and tyrannosaurus rexes. Dinosaur violence has declined greatly since the Cretaceous, but the world has become more complex and interconnected. Communities remain essential.
Effective altruists have a lot to offer one another. But we're geographically dispersed, so it's hard to know whom to ask for help. Skillshare.im is built to fix this.
Skillshare.im is a place for effective altruists to share their skills, items, and couches with one another.
Offer skills or things that you're willing to share. Request items that other people have offered. Here are a few things people have offered on the site:
- access to academic papers
- advice on fundraising, careers, nutrition, productivity, startups, investments, etc.
- French translation (two people!)
- math tutoring
- lodging in Switzerland, the Bay Area, London, Melbourne, and Oxford
As of this writing, we already have 59 offers from 55 people. With your help, we can make it 60 offers from 56 people!
Why use Skillshare.im, instead of getting the things you need the normal way? Certain things, like career advice or study buddies, can be hard to get. Even if you can find someone who has what you're looking for, you might enjoy the opportunity to build relationships with other altruists. Plus, by participating in Skillshare.im, you show that the community of do-gooders is welcoming and supportive, qualities that may draw in new people.
You can be notified of new offers and requests by Twitter or RSS. As with all .impact software, the source code is available on GitHub. We use a publicly accessible Trello board to track bugs and features.
We'd love to hear what you think about the site. Is it awesome, or a horrifically inefficient use of our resources? What could be improved? Send us an e-mail or leave a comment.
Or, “how not to make a fundamental attribution error on yourself;” or, “how to do that thing that you keep being frustrated at yourself for not doing;” or, “finding and solving trivial but leveraged inconveniences.”
I've been travelling around the US for the past month since arriving from Australia, and have had the chance to see how a number of different Less Wrong communities operate. As a departing organiser for the Melbourne Less Wrong community, it has been interesting to make comparisons between the different Less Wrong groups all over the US, and I suspect sharing the lessons learned by different communities will benefit the global movement.
For aspiring organisers, or leaders looking at making further improvements to their community, there already exists an excellent meetup organisers handbook, list of meetups, and NYC case study. I'd also recommend one super useful ability: rapid experimentation. This is a relatively low cost way to find out exactly what format of events attracts the most people and are the most beneficial. Once you know how to win, spam it! This ability is sometimes even better than just asking people what they want out of the community, but you should probably do both.
I'll summarise a few types of meetup that I have seen here. Please feel free to help out by adding descriptions of other types of events you have seen, or variations on the ones already posted if you think there is something other communities could learn.
Public Practical Rationality Meetups (Melbourne)
Held monthly on a Friday in Matthew Fallshaw's offices at TrikeApps. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 25-40 attendees. Until January, it was also advertised publicly on meetup.com, but since then the format has changed significantly. Audience was 50% Less Wrongers, and 50% newcomers, so this served as our outreach event.
6:30pm-7:30pm Doors open, usually most people arrive around 7:15pm
7:30pm sharp-9:00pm: Content introduced. Usually around 3 topics have been prepared by 3 separate Less Wrongers, for discussion in groups of about 10 people each. After 30 minutes the groups rotate, so the presenters present the same thing multiple times. Topics have included: effective communication, giving and receiving feedback, sequence summaries, cryonics, habit formation, etc.
9:00pm - Late: Unstructured socialising, with occasional 'rationality therapy' where a few friends get together to think about a particular issue in someone's life in detail. Midnight souvlaki runs are a tradition.
Monthly Social Games Meetup (Melbourne)
Held in a private residence on a Friday, close to central city public transport. Advertised on Facebook, LessWrong, and the Melbourne LW Mailing List. About 15-25 attendees. Snacks provided by the host.
6:30pm - Late: People show up whenever and there are lots of great conversations. Mafia, (science themed) Zendo, and a variety of board games are popular, but the majority of the night is usually spent talking about what people have learned or read recently. There are enough discussions happening that it is usually easy to find an interesting group to join. Delivery dinner is often ordered, and many people stay quite late.
Large public salons (from Rafael Cosman, Stanford University)
Held on campus in a venue provided by the university. Advertised on a custom mailing list, and presumably Facebook/word of mouth. Audience is mostly unfamiliar with Less Wrong material, and this event has not yet officially become associated with Less Wrong, but Rafael is in the process of getting a spin-off LW-specific meetup happening.
7pm-7:30pm: Guests trickle in. Light background music helps inform the first arrivals that they are indeed at the right place.
7:30pm-7:45pm: Introductions, covering 1. Who you are 2. One thing that people should talk to you about (e.g. "You should talk to me about Conway's Game of Life") 3. One thing that people could come and do with you sometime (e.g. "Come and join me for yoga on Sunday mornings")
7:45pm-9:30pm: Short talks on a variety of topics. At the end of a presentation, instead of tossing it open for questions, everyone comes up to give the speaker a high-five, and then the group immediately enters unstructured discussion for 5-10 minutes. This allows people with pressing questions to go up and ask the speaker, but also allows everyone else to break out to mingle rather than being passive.
Still to come: New York, Austin, and the SF East and South Bay meetup formats.
A brief essay intended for high school students: any thoughts?
If you go to school, take the classes that people tell you to, do your homework, and engage in the extracurricular activities that your peers do, you'll be setting yourself up for an "okay" life. But you can do better than that.
This post is to raise a question about the demographics of rationality: Is rationality something that can appeal to low-IQ people as well?
I don't mean in theory, I mean in practice. From what I've seen, people who are concerned about rationality (in the sense that it has on LW, OvercomingBias, etc.) are overwhelmingly high-IQ.
Meanwhile, HPMOR and other stories in the "rationality genre" appeal to me, and to other people I know. However, I wonder: perhaps part of the reason they appeal to me is that I think of myself as a smart person, and this allows me to identify with the main characters, cheer when they think their way to victory, etc. If I thought of myself as a stupid person, then perhaps I would feel uncomfortable, insecure, and alienated while reading the same stories.
So, I have four questions:
1.) Do we have reason to believe that the kind of rationality promoted on LW, OvercomingBias, CFAR, etc. appeals to a fairly normal distribution of people around the IQ mean? Or should we think, as I suggested, that people with lower IQs are disposed to find the idea of being rational less attractive?
2.) Ditto, except replace "being rational" with "celebrating rationality through stories like HPMOR." Perhaps people think that rationality is a good thing in much the same way that being wealthy is a good thing, but they don't think that it should be celebrated, or at least they don't find such celebrations appealing.
3.) Supposing #1 and #2 have the answers I am suggesting, why?
4.) Making the same supposition, what are the implications for the movement in general?
Note: I chose to use IQ in this post instead of a more vague term like "intelligence," but I could easily have done the opposite. I'm happy to do whichever version is less problematic.
What can we learn about science from the divide during the Cold War?
I have one example in mind: America held that coal and oil were fossil fuels, the stored energy of the sun, while the Soviets held that they were the result of geologic forces applied to primordial methane.
At least one side is thoroughly wrong. This isn't a politically charged topic like sociology, or even biology, but a physical science where people are supposed to agree on the answers. This isn't a matter of research priorities, where one side doesn't care enough to figure things out, but a topic that both sides saw to be of great importance, and where they both claimed to apply their theories. On the other hand, Lysenkoism seems to have resulted from the practical importance of crop breeding.
First of all, this example supports the claim that there really was a divide, that science was disconnected into two poorly communicating camps. It suggests that when the two sides reached the same results on other topics, they did so independently. Even if we cannot learn from this example, it suggests that we may be able to learn from other consequences of dividing the scientific community.
My understanding is that although some Russian language research papers were available in America, they were completely ignored and the scientists failed to even acknowledge that there was a community with divergent opinions. I don't know about the other direction.
- Are there other topics, ideally in physical science, on which such a substantial disagreement persisted for decades? not necessarily between these two parties?
- Did the Soviet scientists know that their American counterparts disagreed?
- Did Warsaw Pact (eg, Polish) scientists generally agree with the Soviets about the origin of coal and oil? Were they aware of the American position? Did other Western countries agree with America? How about other countries, such as China and Japan?
- What are the current Russian beliefs about coal and oil? I tried running Russian Wikipedia through Google Translate and it seemed to support the biogenic theory. (right?) Has there been a reversal among Russian scientists? When? Or does Wikipedia represent foreign opinion? If a divide remains, does it follow the Iron Curtain, or some new line?
- Have I missed some detail that would make me not classify this as an honest disagreement between two scientific establishments?
- Finally, the original question: what can we learn about the institution of science?
Here's an account by a retired engineer of what happened when his old company wanted to streamline a process in the factory where he used to work.
People only knew how to keep the factory going from one day to the next, but all the documentation was lost-- the factory had been sold a couple of times, and efforts at digitization caused more to get lost. Even the name of the factory had been lost.
Fortunately, engineers keep more documentation than their bosses allow them to. (Trade secrets!) And they don't throw the documentation away just because they've retired.
I've been concerned about infrastructure neglect for a while, and this makes me more concerned. On the other hand, instead of just viewing with alarm, I'd like to view with quantified alarm, and I don't have the foggiest idea how to quantify the risks.
Also, some of the information loss is a result of a search for efficiency. How can you tell when you're leaving something important out?
Given LW’s keen interest in bias, it would seem pertinent to be aware of the biases engendered by the karma system. Note: I used to be strictly opposed to comment scoring mechanisms, but witnessing the general effectiveness in which LWers use karma has largely redeemed the system for me.
In “Social Influence Bias: A Randomized Experiment” by Muchnik et al., random comments on a “social news aggregation Web site” were up-voted after being posted. The likelihood of such rigged comments receiving additional up-votes was quantified in comparison to a control group. The results show that users were significantly biased towards the randomly up-voted posts:
The up-vote treatment significantly increased the probability of up-voting by the first viewer by 32% over the control group ... Up-treated comments were not down-voted significantly more or less frequently than the control group, so users did not tend to correct the upward manipulation. In the absence of a correction, positive herding accumulated over time.
At the end of their five-month testing period, the comments that had artificially received an up-vote had an average rating 25% higher than the control group. Interestingly, the severity of the bias was largely dependent on the topic of discussion:
We found significant positive herding effects for comment ratings in “politics,” “culture and society,” and “business,” but no detectable herding behavior for comments in “economics,” “IT,” “fun,” and “general news”.
The herding behavior outlined in the paper seems rather intuitive to me. If before I read a post, I see a little green ‘1’ next to it, I’m probably going to read the post in a better light than if I hadn't seen that little green ‘1’ next to it. Similarly, if I see a post that has a negative score, I’ll probably see flaws in it much more readily. One might say that this is the point of the rating system, as it allows the group as a whole to evaluate the content. However, I’m still unsettled by just how easily popular opinion was swayed in the experiment.
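The accumulation the paper describes is easy to reproduce in a toy model. The following is my own illustration, not the model from Muchnik et al.: each comment is seen by a stream of voters whose chance of up-voting rises slightly whenever the current score is positive. Comments "treated" with one artificial initial up-vote end up with higher average scores, even though all later voters are drawn from the same population.

```python
import random

def final_score(treated, n_voters=50, rng=None):
    # Treatment: one artificial up-vote before any real voters arrive.
    score = 1 if treated else 0
    for _ in range(n_voters):
        # Mild positive herding: a visible positive score nudges the
        # up-vote probability from 10% to 15%.
        p_up = 0.10 + (0.05 if score > 0 else 0.0)
        if rng.random() < p_up:
            score += 1
        elif rng.random() < 0.05:        # small chance of a down-vote
            score -= 1
    return score

rng = random.Random(0)                    # fixed seed for reproducibility
treated = [final_score(True, rng=rng) for _ in range(2000)]
control = [final_score(False, rng=rng) for _ in range(2000)]
treated_mean = sum(treated) / len(treated)
control_mean = sum(control) / len(control)
print(treated_mean, control_mean)         # treated ends up higher on average
```

The gap between the two means exceeds the single injected up-vote, because the early positive score recruits extra up-votes that then sustain themselves; the specific probabilities here are arbitrary assumptions chosen only to make the mechanism visible.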
This certainly doesn't necessitate that we reprogram the site and eschew the karma system. Rather, understanding the biases inherent in such a system will allow us to use it much more effectively. Discussion on how this bias affects LW in particular would be welcomed. Here are some questions to begin with:
- Should we worry about this bias at all? Are its effects negligible in the scheme of things?
- How does the culture of LW contribute to this herding behavior? Is it positive or negative?
- If there are damages, how can we mitigate them?
In the paper, they mentioned that comments were not sorted by popularity, therefore “mitigating the selection bias.” This of course implies that the bias would be more severe on forums where comments are sorted by popularity, such as this one.
For those interested, another enlightening paper is “Overcoming the J-shaped distribution of product reviews” by Nan Hu et al., which discusses rating biases on websites such as Amazon. User gwern has also recommended a longer 2007 paper by the same authors, on which the one above is based: "Why do Online Product Reviews have a J-shaped Distribution? Overcoming Biases in Online Word-of-Mouth Communication"
In my previous post, I introduced the idea of an "l-zombie", or logical philosophical zombie: A Turing machine that would simulate a conscious human being if it were run, but that is never run in the real, physical world, so that the experiences that this human would have had, if the Turing machine were run, aren't actually consciously experienced.
One common reply to this is to deny the possibility of logical philosophical zombies just like the possibility of physical philosophical zombies: to say that every mathematically possible conscious experience is in fact consciously experienced, and that there is no kind of "magical reality fluid" that makes some of these be experienced "more" than others. In other words, we live in the Tegmark Level IV universe, except that, contrary to what Tegmark argues in his paper, there's no objective measure on the collection of all mathematical structures, according to which some mathematical structures somehow "exist more" than others (and, although IIRC that's not part of Tegmark's argument, according to which the conscious experiences in some mathematical structures could be "experienced more" than those in other structures). All mathematically possible experiences are experienced, and to the same "degree".
So why is our world so orderly? There's a mathematically possible continuation of the world that you seem to be living in, where purple pumpkins are about to start falling from the sky. Or the light we observe coming in from outside our galaxy is suddenly replaced by white noise. Why don't you remember ever seeing anything as obviously disorderly as that?
And the answer to that, of course, is that among all the possible experiences that get experienced in this multiverse, there are orderly ones as well as non-orderly ones, so the fact that you happen to have orderly experiences isn't in conflict with the hypothesis; after all, the orderly experiences have to be experienced as well.
One might be tempted to argue that it's somehow more likely that you will observe an orderly world if everybody who has conscious experiences at all, or if at least most conscious observers, see an orderly world. (The "most observers" version of the argument assumes that there is a measure on the conscious observers, a.k.a. some kind of magical reality fluid.) But this requires the use of anthropic probabilities, and there is simply no (known) system of anthropic probabilities that gives reasonable answers in general. Fortunately, we have an alternative: Wei Dai's updateless decision theory (which was motivated in part exactly by the problem of how to act in this kind of multiverse). The basic idea is simple (though the details do contain devils): We have a prior over what the world looks like; we have some preferences about what we would like the world to look like; and we come up with a plan for what we should do in any circumstance we might find ourselves in that maximizes our expected utility, given our prior.
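In symbols (my own sketch of the idea, not Wei Dai's notation), the UDT recipe just described is: given a prior $P$ over worlds and a utility function $U$, pick, once and for all, the policy

```latex
\pi^* \;=\; \arg\max_{\pi} \; \sum_{w} P(w)\, U\!\bigl(\text{outcome in } w \text{ when every instance of the agent follows } \pi\bigr)
```

where a policy $\pi$ specifies an action for every circumstance the agent might find itself in. Crucially, the sum runs over the prior, not over an updated posterior, which is what lets the framework sidestep anthropic probabilities entirely.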
In this framework, Coscott and Paul suggest, everything adds up to normality if, instead of saying that some experiences objectively exist more, we happen to care more about some experiences than about others. (That's not a new idea, of course, or the first time this has appeared on LW -- for example, Wei Dai's What are probabilities, anyway? comes to mind.) In particular, suppose we just care more about experiences in mathematically really simple worlds -- or more precisely, places in mathematically simple worlds that are mathematically simple to describe (since there's a simple program that runs all Turing machines, and therefore all mathematically possible human experiences, always assuming that human brains are computable). Then, even though there's a version of you that's about to see purple pumpkins rain from the sky, you act in a way that's best in the world where that doesn't happen, because that world has so much lower K-complexity, and because you therefore care so much more about what happens in that world.
There's something unsettling about that, which I think deserves to be mentioned, even though I do not think it's a good counterargument to this view. This unsettling thing is that on priors, it's very unlikely that the world you experience arises from a really simple mathematical description. (This is a version of a point I also made in my previous post.) Even if the physicists had already figured out the simple Theory of Everything, which is a super-simple cellular automaton that accords really well with experiments, you don't know that this simple cellular automaton, if you ran it, would really produce you. After all, imagine that somebody intervened in Earth's history so that orchids never evolved, but otherwise left the laws of physics the same; there might still be humans, or something like humans, and they would still run experiments and find that they match the predictions of the simple cellular automaton, so they would assume that if you ran that cellular automaton, it would compute them -- except it wouldn't, it would compute us, with orchids and all. Unless, of course, it does compute them, and a special intervention is required to get the orchids.
So you don't know that you live in a simple world. But, goes the obvious reply, you care much more about what happens if you do happen to live in the simple world. On priors, it's probably not true; but it's best, according to your values, if all people like you act as if they live in the simple world (unless they're in a counterfactual mugging type of situation, where they can influence what happens in the simple world even if they're not in the simple world themselves), because if the actual people in the simple world act like that, that gives the highest utility.
You can adapt an argument that I was making in my l-zombies post to this setting: Given these preferences, it's fine for everybody to believe that they're in a simple world, because this will increase the correspondence between map and territory for the people that do live in simple worlds, and that's who you care most about.
I mostly agree with this reasoning. I agree that Tegmark IV without a measure seems like the most obvious and reasonable hypothesis about what the world looks like. I agree that there seems no reason for there to be a "magical reality fluid". I agree, therefore, that on the priors that I'd put into my UDT calculation for how I should act, it's much more likely that true reality is a measureless Tegmark IV than that it has some objective measure according to which some experiences are "experienced less" than others, or not experienced at all. I don't think I understand things well enough to be extremely confident in this, but my odds would certainly be in favor of it.
Moreover, I agree that if this is the case, then my preferences are to care more about the simpler worlds, making things add up to normality; I'd want to act as if purple pumpkins are not about to start falling from the sky, precisely because I care more about the consequences my actions have in more orderly worlds.
Imagine this: Once you finish reading this article, you hear a bell ringing, and then a sonorous voice announces: "You do indeed live in a Tegmark IV multiverse without a measure. You had better deal with it." And then it turns out that it's not just you who's heard that voice: Every single human being on the planet (who didn't sleep through it, isn't deaf etc.) has heard those same words.
On the hypothesis, this is of course about to happen to you, though only in one of those worlds with high K-complexity that you don't care about very much.
So let's consider the following possible plan of action: You could act as if there is some difference between "existence" and "non-existence", or perhaps some graded degree of existence, until you hear those words and confirm that everybody else has heard them as well, or until you've experienced one similarly obviously "disorderly" event. So until that happens, you do things like invest time and energy into trying to figure out what the best way to act is if it turns out that there is some magical reality fluid, and into trying to figure out what a non-confused version of something like a measure on conscious experience could look like, and you act in ways that don't kill you if we happen to not live in a measureless Tegmark IV. But once you've had a disorderly experience, just a single one, you switch over to optimizing for the measureless mathematical multiverse.
If the degree to which you care about worlds really falls off with their K-complexity the way a universal prior would dictate (weighting a world by roughly 2^-K, with respect to what you and I would consider a "simple" universal Turing machine), then this would be a silly plan; there is very little to be gained from being right in worlds that have that much higher K-complexity. But when I query my intuitions, it seems like a rather good plan:
- Yes, I care less about those disorderly worlds. But not as much less as if I valued them by their K-complexity. I seem to be willing to tap into my complex human intuitions to refer to the notion of "single obviously disorderly event", and assign the worlds with a single such event, and otherwise low K-complexity, not that much lower importance than the worlds with actual low K-complexity.
- And if I imagine that the confused-seeming notions of "really physically exists" and "actually experienced" do have some objective meaning independent of my preferences, then I care much more about the difference between "I get to 'actually experience' a tomorrow" and "I 'really physically' get hit by a car today" than I care about the difference between the world with true low K-complexity and the worlds with a single disorderly event.
In other words, I agree that on the priors I put into my UDT calculation, it's much more likely that we live in measureless Tegmark IV; but my confidence in this isn't extreme, and if we don't, then the difference between "exists" and "doesn't exist" (or "is experienced a lot" and "is experienced only infinitesimally") is very important; much more important than the difference between "simple world" and "simple world plus one disorderly event" according to my preferences if we do live in a Tegmark IV universe. If I act optimally according to the Tegmark IV hypothesis in the latter worlds, that still gives me most of the utility that acting optimally in the truly simple worlds would give me -- or, more precisely, the utility differential isn't nearly as large as if there is something else going on, and I should be doing something about it, and I'm not.
This is the reason why I'm trying to think seriously about things like l-zombies and magical reality fluid. I mean, I don't even think that these are particularly likely to be exactly right even if the measureless Tegmark IV hypothesis is wrong; I expect that there would be some new insight that makes even more sense than Tegmark IV, and makes all the confusion go away. But trying to grapple with the confused intuitions we currently have seems at least a possible way to make progress on this, if it should be the case that there is in fact progress to be made.
Here's one avenue of investigation that seems worthwhile to me, and wouldn't without the above argument. One thing I could imagine finding, that could make the confusion go away, would be that the intuitive notion of "all possible Turing machines" is just wrong, and leads to outright contradictions (e.g., to inconsistencies in Peano Arithmetic, or something similarly convincing). Lots of people have entertained the idea that concepts like the real numbers don't "really" exist, and only the behavior of computable functions is "real"; perhaps not even that is real, and true reality is more restricted? (You can reinterpret many results about real numbers as results about computable functions, so maybe you could reinterpret results about computable functions as results about these hypothetical weaker objects that would actually make mathematical sense.) So it wouldn't be the case after all that there is some Turing machine that computes the conscious experiences you would have if pumpkins started falling from the sky.
Does the above make sense? Probably not. But I'd say that there's a small chance that maybe yes, and that if we understood the right kind of math, it would seem very obvious that not all intuitively possible human experiences are actually mathematically possible (just as obvious as it is today, with hindsight, that there is no Turing machine which takes a program as input and outputs whether this program halts). Moreover, it seems plausible that this could have consequences for how we should act. This, together with my argument above, make me think that this sort of thing is worth investigating -- even if my priors are heavily on the side of expecting that all experiences exist to the same degree, and ordinarily this difference in probabilities would make me think that our time would be better spent on investigating other, more likely hypotheses.
Leaving aside the question of how I should act, though, does all of this mean that I should believe that I live in a universe with l-zombies and magical reality fluid, until such time as I hear that voice speaking to me?
I do feel tempted to try to invoke my argument from the l-zombies post that I prefer the map-territory correspondences of actually existing humans to be correct, and don't care about whether l-zombies have their map match up with the territory. But I'm not sure that I care much more about actually existing humans being correct, if the measureless mathematical multiverse hypothesis is wrong, than I care about humans in simple worlds being correct, if that hypothesis is right. So I think that the right thing to do may be to have a subjective belief that I most likely do live in the measureless Tegmark IV, as long as that's the view that seems by far the least confused -- but continue to spend resources on investigating alternatives, because on priors they don't seem sufficiently unlikely to make up for the potential great importance of getting this right.
To whoever has for the last several days been downvoting ~10 of my old comments per day:
It is possible that your intention is to discourage me from commenting on Less Wrong.
The actual effect is the reverse. My comments still end up positive on average, and I am therefore motivated to post more of them in order to compensate for the steady karma drain you are causing.
If you are mass-downvoting other people, the effect on some of them is probably the same.
To the LW admins, if any are reading:
Look, can we really not do anything about this behaviour? It's childish and stupid, and it makes the karma system less useful (e.g., for comment-sorting), and it gives bad actors a disproportionate influence on Less Wrong. It seems like there are lots of obvious things that would go some way towards helping, many of which have been discussed in past threads about this.
Failing that, can we at least agree that it's bad behaviour and that it would be good in principle to stop it or make it more visible and/or inconvenient?
Failing that, can we at least have an official statement from an LW administrator that mass-downvoting is not considered an undesirable behaviour here? I really hope this isn't the opinion of the LW admins, but as the topic has been discussed from time to time with never any admin response I've been thinking it increasingly likely that it is. If so, let's at least be honest about it.
To anyone else reading this:
If you should happen to notice that a sizeable fraction of my comments are at -1, this is probably why. (Though of course I may just have posted a bunch of silly things. I expect it happens from time to time.)
My apologies for cluttering up Discussion with this. (But not very many apologies; this sort of mass-downvoting seems to me to be one of the more toxic phenomena on Less Wrong, and I retain some small hope that eventually something may be done about it.)
On ChrisHallquist's post extolling the virtues of money, the top comment is Eliezer pointing out the lack of concrete examples. Can anyone think of any? This is not just hypothetical: if I think your suggestion is good, I will try it (and report back on how it went).
I care about health, improving personal skills (particularly: programming, writing, people skills), gaining respect (particularly at work), and entertainment (these days: primarily books and computer games). If you think I should care about something else, feel free to suggest it.
I am early-twenties programmer living in San Francisco. In the interest of getting advice useful to more than one person, I'll omit further personal details.
If your idea requires significant ongoing time commitment, that is a major negative.