Results of a One-Year Longitudinal Study of CFAR Alumni
By Dan from CFAR
Introduction
When someone comes to a CFAR workshop, and then goes back home, what is different for them one year later? What changes are there to their life, to how they think, to how they act?
CFAR would like to have an answer to this question (as would many other people). One method that we have been using to gather relevant data is a longitudinal study, comparing participants' survey responses from shortly before their workshop with their survey responses approximately one year later. This post summarizes what we have learned thus far, based on data from 135 people who attended workshops from February 2014 to April 2015 and completed both surveys.
The survey questions can be loosely categorized into four broad areas:
- Well-being: On the whole, is the participant's life going better than it was before the workshop?
- Personality: Have there been changes on personality dimensions which seem likely to be associated with increased rationality?
- Behaviors: Have there been increases in rationality-related skills, habits, or other behavioral tendencies?
- Productivity: Is the participant working more effectively at their job or other projects?
We chose to measure these four areas because they represent part of what CFAR hopes that its workshops accomplish, they are areas where many workshop participants would like to see changes, and they are relatively tractable to measure on a survey. There are other areas where CFAR would like to have an effect, including people's epistemics and their impact on the world, which were not a focus of this study.
We relied heavily on existing measures which have been validated and used by psychology researchers, especially in the areas of well-being and personality. These measures typically are not a perfect match for what we care about, but we expected them to be sufficiently correlated with what we care about for them to be worth using.
We found significant increases in variables in all 4 areas. A partial summary:
- Well-being: increases in happiness and life satisfaction, especially in the work domain (but no significant change in life satisfaction in the social domain)
- Personality: increases in general self-efficacy, emotional stability, conscientiousness, and extraversion (but no significant change in growth mindset or openness to experience)
- Behaviors: increased rate of acquisition of useful techniques, emotions experienced as more helpful & less of a hindrance (but no significant change on measures of cognitive biases or useful conversations)
- Productivity: increases in motivation while working and effective approaches to pursuing projects (but no significant change in income or number of hours worked)
The rest of this post is organized into three main sections. The first section describes our methodology in more detail, including the reasoning behind the longitudinal design and some information on the sample. The second section gives the results of the research, including the variables that showed an effect and the ones that did not; the results are summarized in a table at the end of that section. The third section discusses four major methodological concerns—the use of self-report measures (where respondents might just give the answer that sounds good), attrition (some people who took the pre-survey did not complete the post-survey), other sources of personal growth (people might have improved over time without attending the CFAR workshop), and regression to the mean (people may have changed after the workshop simply because they came to the workshop at an unusually high or low point)—and attempts to evaluate the extent to which these four issues may have influenced the results.
The effect of effectiveness information on charitable giving
Economists Dean Karlan and Daniel Wood have a new working paper, The Effect of Effectiveness: Donor Response to Aid Effectiveness in a Direct Mail Fundraising Experiment.
The Abstract:
We test how donors respond to new information about a charity’s effectiveness. Freedom from Hunger implemented a test of its direct marketing solicitations, varying letters by whether they include a discussion of their program’s impact as measured by scientific research. The base script, used for both treatment and control, included a standard qualitative story about an individual beneficiary. Adding scientific impact information has no effect on whether someone donates, or how much, in the full sample. However, we find that amongst recent prior donors (those we posit more likely to open the mail and thus notice the treatment), large prior donors increase the likelihood of giving in response to information on aid effectiveness, whereas small prior donors decrease their giving. We motivate the analysis and experiment with a theoretical model that highlights two predictions. First, larger gift amounts, holding education and income constant, is a proxy for altruism giving (as it is associated with giving more to fewer charities) versus warm glow giving (giving less to more charities). Second, those motivated by altruism will respond positively to appeals based on evidence, whereas those motivated by warm glow may respond negatively to appeals based on evidence as it turns off the emotional trigger for giving, or highlights uncertainty in aid effectiveness.
In the experimental condition (for one of the two waves of mailings), the donors received a mailing with this information about the charity's effectiveness:
In order to know that our programs work for people like Rita, we look for more than anecdotal evidence. That is why we have coordinated with independent researchers [at Yale University] to conduct scientifically rigorous impact studies of our programs. In Peru they found that women who were offered our Credit with Education program had 16% higher profits in their businesses than those who were not, and they increased profits in bad months by 27%! This is particularly important because it means our program helped women generate more stable incomes throughout the year.
These independent researchers used a randomized evaluation, the methodology routinely used in medicine, to measure the impact of our programs on things like business growth, children's health, investment in education, and women's empowerment.
In the control condition, the mailing instead included this paragraph:
Many people would have met Rita and decided she was too poor to repay a loan. Five hungry children and a small plot of mango trees don’t count as collateral. But Freedom from Hunger knows that women like Rita are ready to end hunger in their own families and in their communities.
Practical Benefits of Rationality (LW Census Results)
by Dan from CFAR
Abstract: Two measures of the practical benefits of rationality, one a self-report of the benefits of being part of the rationality community and the other a measure of how often a person adds useful techniques to their repertoire, were included on the 2013 Less Wrong survey. In-person involvement with LW/CFAR predicted both measures of benefits, with friendships with LWers and attending a CFAR workshop showing the strongest and most consistent effects. Online Less Wrong participation and background had weaker and less consistent effects. Growth mindset also independently predicted both measures of practical benefits, and on the measure of technique acquisition there was an interaction effect suggesting that in-person LW/CFAR involvement may be especially beneficial for people high in growth mindset. However, some caution is warranted in interpreting these correlational, self-report results.
Introduction
Though I first found Less Wrong through my habit of reading interesting blogs, the main reason why I've gotten more and more involved in the rationality community is my suspicion that this rationality stuff might be pretty useful. Useful not only for thinking clearly about tricky intellectual topics, but also in ways that have more directly practical benefits.
CFAR obviously has similar interests, as it aims to create a community of people who are effective at acting in the world.
The 2013 LW census/survey provided an opportunity for us to probe how the rationality community is doing so far at finding these practical benefits, as it allowed us to survey a large cross section of the Less Wrong community. Unfortunately, there is not a standard, simple measure of practical benefits which we could just stick on the survey, and we were only able to use a correlational research design, but we sought to get some relevant information by coming up with two self-report questions to include on the survey.
One question was somewhat broader than the set of practical benefits that we were interested in and the other was somewhat narrower. First, there was a broad self-report question asking people how much they had benefited from being involved in the rationality community. Second, we asked people more narrowly how often they successfully added a useful technique or approach to their repertoire. We were primarily interested in seeing whether involvement in the LW community would predict practical benefits on these two measures, and (if so) which forms of involvement would have the strongest relationship to these benefits.
About 1400 people answered the relevant survey questions, including about 400 who have read the sequences, about 150 who regularly attend LW meetups, about 100 who have attended a full CFAR workshop, about 100 who interact with other LWers in person all the time, and about 50 who met a romantic partner through LW. The survey also included a brief scale measuring growth mindset, and a question about age.
Some methodological notes: In the body of this post I’ve tried to put the results in a format that’s relatively straightforward to interpret. More technical details and additional analyses are included in footnotes, and I can add more details in the comments. Note that the study design is entirely correlational, and the questions are all self-report (unlike last year’s questions, which included tests of standard biases). This gives some reason for caution in interpreting the results, and I’ll note some places where that is especially relevant.
Background & Survey Design
The simple, obvious thing to do, in order to investigate how much people have benefited from their involvement in the rationality community, is to ask them that question. So we did: "How much have you benefited from your exposure to and participation in the rationality community (Less Wrong, CFAR, in-person contact with LW/HPMOR readers, etc.)?" There were 7 response options, which we can scale as -3 to +3, where +3 is “My life is MUCH BETTER than it would have been without exposure to the rationality community” (and -3 is “... MUCH WORSE…”).
This straightforward question has a couple of straightforward limitations. For one, we might expect people who are involved in almost any activity to say that they benefit from it; self-reported benefit does not necessarily indicate actual benefit. Second, it could include a broad range of benefits, some of which might not have much to do with the usefulness of rationality (such as meeting your current romantic partner at a Less Wrong meetup). So we also included a narrower question related to competence which is less susceptible to these issues.
A simple model of how people are able to become highly competent/productive/successful/impressive individuals is that they try lots of things and keep doing the ones that work. A person’s work habits, the questions they ask during conversations, the methods that they use to make certain kinds of decisions, and many other things can all be developed through a similar iterative process. Over time, someone who has a good process in place for trying things & sticking with the helpful ones will end up collecting a large set of habits/techniques/approaches/principles/etc. which work for them.
The second set of questions which we included on the survey was based on this process, with the aim of measuring how often people add a new useful technique to their repertoire. There were 3 survey questions based on a streamlined version of this process: first you hear about many different techniques, then you try some fraction of the techniques that you hear about, and then some fraction of the techniques that you try end up working for you and sticking as part of your repertoire. We first asked “On average, about how often do you *read or hear about* another plausible-seeming technique or approach for being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc.?”, then “...how often do you *try out* another plausible-seeming technique...”, and finally “...how often do you find another technique or approach that *successfully helps you at*...” This final question, about how frequently people acquire a new helpful technique, is our other main outcome measure of practical benefits.
In reality, people often generate their own ideas of techniques to try, and try many variations rather than just a single thing (e.g., many people end up with their own personalized version of the pomodoro technique). Focusing on the streamlined process of hear → try → acquire is a simplification which had two survey-specific benefits. First, having the context of “hearing about a technique and then trying it” was intended to make it clearer what to count as “a technique,” which is important since the outcome measure is a count of the number of techniques acquired. Second, including the “hearing” and “trying” questions allows us to probe this process in a bit more detail by (for example) breaking down the number of new techniques that a person acquired into two components: the number of new techniques that they tried and the hit rate (techniques acquired divided by techniques tried).
One other predictor variable which we included on the survey was a 4-item measure of growth mindset, which was taken from Carol Dweck’s research (sample item: “No matter what kind of person you are, you can always change substantially”).[1] A fixed mindset involves thinking that personal characteristics are fixed and unchangeable - you either have them or you don’t - while a growth mindset involves thinking that personal characteristics can change as a person grows and develops. Dweck and her colleagues have found that growth mindset about a characteristic tends to be associated with more productive behaviors and more improvement over time. For example, children with a growth mindset about being good at thinking tend to seek out intellectual challenges which stretch their abilities, while children with a fixed mindset tend to avoid tasks that they might fail at and seek tasks which they know they can do well.
A blog based on the idea of becoming less wrong sounds like it would reflect growth mindset more than fixed mindset, and many aspects of the local idea cluster seem to match that. Ideas like: there are systematic methods that you can learn which will allow you to form more accurate models of the world. Complex skills can be broken down into simple trainable components. Don't get too attached to a particular image of who you are and what you stand for. Mastering the right cognitive toolkit can make you more effective at accomplishing the things that you care about. Tsuyoku Naritai! In addition to these connections to LW thinking, growth mindset also seems like it could facilitate the process of becoming more successful by trying out various changes to the way that you do things and sticking with the ones that work. Thus, we wanted to investigate whether people who were more involved in the rationality community (in various ways) had more of a growth mindset, and whether people with more of a growth mindset reported more practical benefits.
The other main predictor variables were several different indicators of people's involvement in the rationality community:
LW background
A composite scale, which standardized and then averaged together four questions which all indicate a person’s amount of background with the lesswrong.com website (and which, as I found on previous years' surveys, all correlate with each other and show similar patterns of relationships with other variables). The four questions measured: having read the sequences (ranging from 1 “Never even knew they existed until this moment” to 7 “[Read] All or nearly all of the Sequences”), karma (log-transformed), LW Use (ranging from 1 “I lurk, but never registered an account” to 5 “I've posted in Main”), and length of time in the community (capped at 8 years).
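A standardize-then-average composite like this one can be sketched as follows. This is a minimal illustration of the technique, not the actual analysis code; the variable names and example values are made up for the sketch.

```python
import numpy as np

def composite_scale(*columns):
    """Standardize each input column to z-scores, then average across
    columns to get one composite score per respondent."""
    zs = []
    for col in columns:
        col = np.asarray(col, dtype=float)
        zs.append((col - col.mean()) / col.std())
    return np.mean(zs, axis=0)

# Illustrative responses for four hypothetical respondents (not real
# survey data): sequences read (1-7), log-transformed karma,
# LW use (1-5), and years in the community (capped at 8).
sequences = [7, 3, 5, 1]
log_karma = [np.log1p(k) for k in [500, 10, 100, 0]]
lw_use = [5, 2, 4, 1]
years = [6, 2, 4, 1]

background = composite_scale(sequences, log_karma, lw_use, years)
```

Standardizing before averaging keeps any one question (e.g., karma, which has a much wider raw range) from dominating the composite; the resulting scale has mean zero by construction.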
Time per day on Less Wrong
“How long, in approximate number of minutes, do you spend on Less Wrong in the average day?” (log-transformed).
Meetup attendance
“Do you attend Less Wrong meetups?”
“Yes, regularly,” “Yes, once or a few times,” or “No” (categorical variable).
CFAR workshop attendance
“Have you ever attended a CFAR workshop?”
“Yes, I have been to a full (3+ day) workshop,” “I have been to at least one CFAR class, but not a full (3+ day) workshop,” or “No” (categorical variable).
LW friendships
“Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example you live with other Less Wrongers, or you are close friends and frequently go out with them?”
“Yes, all the time,” “Yes, sometimes,” or “No” (categorical variable).
LW romantic partner
“Have you ever been in a romantic relationship with someone you met through the Less Wrong community?”
“Yes,” “I didn't meet them through the community, but they're part of the community now,” or “No” (categorical variable).
I considered combining these four measures of in-person involvement with the LW community (LW meetups, CFAR workshops, LW friendships, and LW romantic partners) into a single scale of in-person LW involvement, but there ended up being a large enough sample size within these groups and strong enough effects for me to analyze them separately.
Respondents also reported their age (which was transformed by taking the square root).
Results
I. Self-reported Benefit
"How much have you benefited from your exposure to and participation in the rationality community (Less Wrong, CFAR, in-person contact with LW/HPMOR readers, etc.)?"
The average response to this question was a 1.4 on a -3 to +3 scale (SD = 1.08), and 15% of people selected the scale maximum “My life is MUCH BETTER than it would have been without exposure to the rationality community.”
Which variables were associated with a larger self-reported benefit from the rationality community?
In short, all of them.
Each of the following variables was significantly related to this self-reported measure of benefit, and in a regression which controlled for the other variables all of them remained significant except for meetup attendance (which became p = 0.07).[2] For ease of interpretation, I have reported the percent of people in each of the following groups who selected the scale maximum. I have sorted the variables in order of effect size, from largest to smallest, based on the results of the regression (see the footnote for more details).
Percent of people in each subgroup answering “My life is MUCH BETTER than it would have been without exposure to the rationality community”
61% LW romantic partner (n = 54)
44% attended a full CFAR workshop (n = 100)
19% age 25 or less (younger people reported more benefit) (n = 724)
50% LW friendships (n = 88)
28% above 3.0 on growth mindset scale (n = 277)
25% high LW background (n = 137)
35% regularly attend meetups (n = 156)
31% acquire a new technique every 3 weeks or more often (n = 213)
18% use LW for 30+ min per day (n = 218)
15% all respondents (n = 1451)
Three noteworthy results:
- Each of the variables related to involvement in the rationality community was associated with reports of getting more benefit from the community.
- The strongest effects came from people who were involved in fairly intensive, in-person activities: finding a romantic partner through LW, attending a full CFAR workshop, and being around other LWers in person all the time.
- Three variables which were not directly related to community involvement – younger age, growth mindset, and acquiring new techniques – were all predictive of self-reported benefit from the rationality community.
One interpretation of these results is that getting involved in the rationality community causes people to acquire useful rationality skills which improve their lives, with larger effects for people who get involved in more depth through close relationships, shared housing, CFAR workshops, etc. However, as noted above, these effects could also be due to non-rationality-related benefits (e.g., finding friends or a romantic partner), a tendency to say nice things about activities & communities that you're a part of, or causal effects in the other direction (e.g., people who benefited the most from the Less Wrong website might be especially likely to attend a CFAR workshop or move into shared housing with other LWers).
It is worth noting that growth mindset and acquiring new techniques were both predictive of larger benefit from the rationality community even though neither variable is directly related to involvement in the community. That makes these effects less open to some of the alternative explanations which could account for the community involvement effects and provides some validation of the self-report measure of benefits, although other causal paths are still a possibility (e.g., people who have changed more since they started reading LW may have come to have more of a growth mindset and also report more benefits).
II. Acquiring New Techniques
"On average, about how often do you find another technique or approach that successfully helps you at being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc.?"
The average response was a 2.23 (SD = 1.31) on a 1 to 8 scale where 2 is “About once every six months” and 3 is “About once every 2 months.” This can be interpreted more intuitively as acquiring one new technique every 146 days (as a geometric mean).[3]
Which variables were associated with acquiring useful techniques more often?
Only some of them.
LW friendships and CFAR workshop attendance again had significant effects. The other two forms of in-person LW involvement, LW meetups and LW romantic partner, were also predictive of acquiring more techniques, but those effects did not remain significant in a regression controlling for the other variables. Time per day on Less Wrong had a weaker but reliable positive relationship with acquiring new techniques, while LW background had a significant relationship in the opposite direction: people with more LW background acquired fewer techniques. Younger age and growth mindset were again predictive of more benefit.
Based on the results of a regression, here is the number of days per new technique acquired (sorted by effect size, smaller numbers indicate faster technique acquisition).[4] In this list, both the number of days given and the order of the list reflect the results of the regression which controls statistically for the other predictor variables. (* = p < .05, ** = p < .01).
85 days: LW friendships *
87 days: Age (younger) **
95 days: Attended a full CFAR workshop **
114 days: LW romantic partner (p = .21)
118 days: Growth mindset **
174 days: LW background (negative effect) **
131 days: Time per day on Less Wrong **
151 days: Regularly attend meetups (p = .63)
146 days: all respondents
The pattern that was apparent on the self-report measure of benefit from the rationality community – that in-person interactions were more predictive of benefits than online participation – was even stronger on this measure. Attending a CFAR workshop and LW friendships had the largest effects, and these effects seem to be cumulative. People who both attended a full CFAR workshop and interacted with LW friends “all the time” (n = 39) acquired a new technique every 45 days on average, while people who had no in-person interaction with LWers by any of the 4 variables (n = 824) acquired a new technique every 165 days.
Some of the alternative explanations for the effects on self-reported benefit seem less plausible here. For example, it seems less likely that people who have LW friendships would say that they try and acquire more new techniques out of a general tendency to say nice things about communities that you're a part of. Alternative causal paths are still a clear possibility, though. People who tend to try more things may be more likely to go to LW meetups, sign up for CFAR workshops, or move to a city where they can hang out in person with people from their favorite website.
III. The Process of Trying & Acquiring New Techniques
“On average, about how often do you *read or hear about* another plausible-seeming technique or approach for being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc.?”
“On average, about how often do you *try out* another plausible-seeming technique or approach for being more rational / more productive / happier / having better social relationships / having more accurate beliefs / etc.?”
On average, people heard about a new technique every 12 days and tried a new technique about every 55 days. That means that (at least according to the streamlined model: hear → try → acquire) people tried about 22% of the techniques that they heard about, and added about 36% of the techniques that they tried to their repertoire.[5]
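The percentages above can be backed out directly from the average intervals, since the streamlined model treats each stage as a filter on the previous one. A quick sketch of that arithmetic (the small discrepancy with the post's ~36% figure comes from using the rounded day counts):

```python
# Average number of days between events at each stage of the
# hear -> try -> acquire pipeline, as reported above.
hear_interval = 12.0     # days per technique heard about
try_interval = 55.0      # days per technique tried
acquire_interval = 146.0 # days per technique acquired

# Fraction of heard-about techniques that get tried:
try_rate = hear_interval / try_interval        # ~22%
# Fraction of tried techniques that stick (the "hit rate"):
hit_rate = try_interval / acquire_interval     # ~38% with rounded inputs
```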
Breaking down acquiring techniques into its two components, techniques tried and hit rate (techniques acquired divided by techniques tried) all of the effects discussed above involving acquiring techniques appear to be due to trying techniques, and not to the hit rate. None of the variables discussed here were predictive of hit rate, and the variables that predicted acquiring techniques were similarly predictive of trying techniques (though in most cases the effect was slightly weaker). In particular, trying techniques predicted self-reported benefit from the rationality community, and people with more LW background tried fewer techniques. People who both attended a full CFAR workshop and interacted with LW friends “all the time” (n = 39) tried a new technique every 13 days, while people who reported no in-person interaction with LWers (n = 849) tried a new technique every 65 days.
These data provide some evidence that, if CFAR workshops, LW friendships, growth mindset, and time on Less Wrong cause people to acquire more techniques, a substantial portion of the effect comes from getting people to try more things (and not just getting them to be more effective at trying the things that they already have been trying).
However, these data do not clearly pin down what is different about people's process of trying things. One might expect that hit rate reflects how good a person is at choosing what to try and actually trying it (in a way that makes useful techniques likely to stick), so the lack of effect on hit rate indicates that the difference is just in trying more things. But if someone improved at the process of trying things, becoming more efficient at getting useful-for-them techniques to stick and setting aside the not-useful-for-them techniques, then that might show up primarily as an increase in number of techniques tried (as they cycle through the try things process more rapidly & more frequently). Or, a person who lowers their threshold for what techniques to try might start trying five times as many things and finding twice as many that work for them, which would show up as a drop in their hit rate (they'd also be adding useful techniques to their repertoire twice as fast).[6]
IV. Growth Mindset
Sample item: “You can do things differently, but the important parts of who you are can't really be changed” (reverse-scored).
Growth mindset – seeing important parts of yourself as malleable, and focusing on what you can do to improve – seems like it could be related to the process of benefiting from the rationality community in multiple ways. Here are three:
- People with more of a growth mindset might tend to try more things, acquire more useful rationality techniques, and get more practical benefits out of the things they do.
- Being involved in the rationality community might cause people to shift towards a growth mindset from a fixed mindset.
- Relatively intensive involvement in the rationality community (such as living in a house with other LWers, or attending a CFAR workshop) might provide a bigger benefit to people with more of a growth mindset.
Item 1 is what we've been looking at in the analysis of acquiring new techniques and self-reported benefit, with growth mindset as one of the predictor variables. The hypothesis is that people who score higher in growth mindset will report more benefit on those measures, and the data support that hypothesis (though these correlational results are also consistent with alternative causal hypotheses).
Item 2 identifies a hypothesis which treats growth mindset as an outcome variable instead of a predictor variable: do people who regularly attend LW meetups have more of a growth mindset? Or those who have more LW background, or who have attended a CFAR workshop, or who have LW friends, etc.? This hypothesis is relatively straightforward to examine with this data set, although the correlational design leaves it an open question whether involvement in the LW community led to a growth mindset or whether having a growth mindset led to people getting more involved in the LW community.
When looking at one variable at a time, each of the measures of in-person involvement in the LW community is significantly predictive of growth mindset. In order of effect size (given in Cohen's d, which counts standard deviations), growth mindset was predicted separately by LW romantic partner (d = 0.42), attending a CFAR workshop (d = 0.21), LW friendships (d = 0.20), and regularly attending meetups (d = 0.15). However, when controlling for the other predictor variables, only having a LW romantic partner remained statistically significant (d = 0.46, p = .03) and attending a CFAR workshop remained marginally significant (d = 0.18, p = .07); LW friendships and meetup attendance became nonsignificant (d < 0.10, p > 0.3).
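Cohen's d, as used here, expresses a group difference in pooled-standard-deviation units. A minimal sketch of the computation, with made-up growth-mindset scores for illustration (not the survey's actual data):

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: difference of group means divided by the pooled
    standard deviation (using sample variances, ddof=1)."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled_var = (((len(a) - 1) * a.var(ddof=1)
                   + (len(b) - 1) * b.var(ddof=1))
                  / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative scores on a 1-4 growth-mindset scale for two groups.
with_partner = [3.4, 3.1, 3.8, 2.9, 3.5]
without_partner = [3.0, 2.7, 3.2, 2.8, 3.1, 2.6]
d = cohens_d(with_partner, without_partner)
```

By convention, d ≈ 0.2 is considered a small effect, 0.5 medium, and 0.8 large, which puts most of the effects above in the small range.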
LW background showed the opposite pattern: it was not related to growth mindset on its own (r = -0.04, p = .13), but it became a highly significant predictor of lower growth mindset when controlling for the other variables related to LW involvement (r = -0.11, p < .01). One plausible causal story that could explain this pattern of correlations is that people who are high in growth mindset who get involved in the website are more likely to also get involved in other in-person ways, while those lower in growth mindset are more likely to just stick with the website. This would lead to the negative relationship between LW background and growth mindset when controlling for in-person LW involvement. According to this causal story, growth mindset is a cause of in-person LW involvement rather than a consequence.
Younger age was the strongest predictor of growth mindset, whether controlling for other variables (r = -0.15, p < .01) or not (r = -0.19, p < 0.01), and time per day on Less Wrong was not a significant predictor.
Item 3 from the list predicts an interaction effect between growth mindset and involvement in LW: the benefit of greater involvement in the LW community will be stronger among people high in growth mindset (or, equivalently, the benefit of growth mindset will be stronger among people who are more involved in the LW community). This hypothesis is particularly interesting because this interaction effect seems more plausible under the causal model where LW involvement and growth mindset both cause greater practical benefits than it does under the alternative causal theory that competence or a tendency to try things causes in-person LW involvement.
When predicting self-reported benefit from the rationality community, there was no sign of these interaction effects, whether looking at the predictor variables one at a time or including them all in a multiple regression. Growth mindset was an equally strong predictor of self-reported benefit for people who are closely involved in the LW community (by each of the various measures) and for people who are less closely involved in the LW community.
When predicting acquiring new techniques, these interaction effects were significant in several cases.[7] A growth mindset was associated more strongly with acquiring new techniques among people who regularly attend LW meetups (p = .003), people who are younger (p = .005), people who have attended a CFAR workshop (p = .04), and (with marginal statistical significance) people with LW friendships (p = .06). In a multiple regression that included each of these variables, none of these interaction effects was individually statistically significant except the age x growth mindset interaction (presumably because the various forms of LW involvement were all associated with each other, making it difficult to tease apart their effects).[8]
These results are consistent with the model that the various forms of in-person involvement with the rationality community are especially helpful at producing practical benefits for people who are high in growth mindset.
Conclusion
With this correlational research design there is a limit to how well we can distinguish the hypothesis that LW involvement leads to benefits from other causal stories, but each of the three main variables that we examined was related to in-person LW involvement in ways that were consistent with this hypothesis.
People who have been involved with the in-person LW/CFAR community were especially likely to indicate that their life is better due to the LW community. They tended to report that they tried out and acquired new useful techniques more frequently, especially if they were also high in growth mindset. If spending time with LWers or attending a CFAR workshop leads people to try more rationality-related techniques, find more things that work well for them, and reap the benefits, then these are the results that we would expect to see.
Footnotes
[1] The 4 mindset questions on the survey were taken from Dweck's book Mindset (p. 13). These questions and others like them have been used to measure mindset in many published studies. Many of the questions that have been used focus more narrowly on mindset about intellectual ability, while these four questions deal more broadly with personal qualities.
[2] Unless otherwise noted, all reported effects are significant both in tests with only the single predictor variable and also in tests which controlled for the other predictor variables. A regression was run predicting benefit based on the LW involvement variables and age (growth mindset and acquiring new techniques were not controlled for, since they could be consequences of LW involvement which mediate the benefit). Though all three levels of the categorical variables were included in the regression, the effect size used to order the variables in the list was calculated as the standardized difference in least-squares means between the highest level of the group (e.g., regularly attend meetups) and the lowest level (e.g., never attend meetups), leaving out intermediate levels (e.g., occasionally attend meetups). To estimate the effect size of continuous variables, the correlation coefficient was translated into an equivalent standardized mean difference by the formula d = 2r/sqrt(1 - r^2).
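For readers who want to check the arithmetic, the conversion in this footnote is a one-liner; a minimal Python sketch (the example r value is arbitrary):

```python
import math

def r_to_d(r):
    """Convert a correlation coefficient r into an equivalent
    standardized mean difference (Cohen's d): d = 2r / sqrt(1 - r^2)."""
    return 2 * r / math.sqrt(1 - r ** 2)

# A correlation of r = 0.10 corresponds to roughly d = 0.20.
print(round(r_to_d(0.10), 3))  # 0.201
```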
[3] The 8 response options were coded as a 1-8 scale, which was used for all analyses. Each scale point indicates a 3-4x multiplier in how often a person acquires new techniques. This 8-point scale can be interpreted as a log scale for the variable "days per technique acquired" (they are associated approximately by the equation 7*3^(5-x)) so a mean on this scale is equivalent to the geometric mean of the number of days. For example, a 3.5 on the 8-point scale translates into 36 days, which is the geometric mean of 21 days (a 4 on the scale) and 63 days (approximately a 3 on the scale).
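To make the scale-to-days conversion concrete, here is a small Python sketch of the footnote's approximate relation, days ≈ 7·3^(5−x):

```python
def scale_to_days(x):
    """Convert a point on the 8-point acquisition scale into an
    approximate 'days per technique acquired': days = 7 * 3**(5 - x)."""
    return 7 * 3 ** (5 - x)

print(scale_to_days(4))           # 21 days (a 4 on the scale)
print(scale_to_days(3))           # 63 days (a 3 on the scale)
print(round(scale_to_days(3.5)))  # 36 days, the geometric mean of 21 and 63
```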
[4] For categorical variables, the number of days is based on the least squares mean for the highest level of the group (e.g., regularly attend meetups). For continuous variables, it is based on the regression equation predicting the values one standard deviation above the mean of the predictor variable.
[5] On the 8 point scale, “heard about” has mean = 4.48 (SD = 1.62) and “tried” has mean = 3.12 (SD = 1.56). Rate of trying is simply “trying” minus “heard about,” mean = -1.37 (SD = 1.42), and hit rate had scale mean = -0.94 (SD = 0.84). These numbers can also be interpreted as being on a log base 3 scale, so -1 on the hit rate scale corresponds to an actual hit rate of 1/3 (1 technique acquired for every 3 techniques tried).
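Since the hit-rate scale is log base 3, converting a scale value back to an actual hit rate is just exponentiation; a quick sketch:

```python
def scale_to_hit_rate(x):
    """Convert a value on the log-base-3 hit-rate scale into an
    actual hit rate (techniques acquired per technique tried)."""
    return 3 ** x

print(round(scale_to_hit_rate(-1), 3))     # 0.333: 1 acquired per 3 tried
print(round(scale_to_hit_rate(-0.94), 2))  # the sample mean, ~0.36
```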
[6] Trying techniques can be further broken down into two components, hearing about techniques and percentage tried (techniques tried divided by techniques heard about). The data suggest that both are relevant, but they are harder to tease apart with the limited statistical power of this data set.
[7] When looking at a single categorical variable, I only looked at the highest level of the group and the lowest level, leaving out the intermediate level. For example, I tested whether growth mindset was more strongly related to acquiring techniques among people who regularly attend meetups than among people who never attend meetups (leaving out the group that occasionally attends meetups). In the regression including all predictor variables, I included the intermediate level groups (since otherwise it would have been necessary to exclude the data of anyone who was in an intermediate level group on any of the variables).
[8] When I combined the four variables related to in-person involvement into a single composite scale (scoring the highest level of involvement on each variable as a 2 and the lowest level as a 0), the interaction between growth mindset and this in-person involvement scale was statistically significant in a multiple regression predicting techniques acquired (p < .01).
Participation in the LW Community Associated with Less Bias
Summary
CFAR included 5 questions on the 2012 LW Survey which were adapted from the heuristics and biases literature, based on five different cognitive biases or reasoning errors. LWers, on the whole, showed less bias than is typical in the published research (on all 4 questions where this was testable), but did show clear evidence of bias on 2-3 of those 4 questions. Further, those with closer ties to the LW community (e.g., those who had read more of the sequences) showed significantly less bias than those with weaker ties (on 3 out of 4-5 questions where that was testable). These results all held when controlling for measures of intelligence.
METHOD & RESULTS
Being less susceptible to cognitive biases or reasoning errors is one sign of rationality (see the work of Keith Stanovich & his colleagues, for example). You'd hope that a community dedicated to rationality would be less prone to these biases, so I selected 5 cognitive biases and reasoning errors from the heuristics & biases literature to include on the LW survey. There are two possible patterns of results which would point in this direction:
- high scores: LWers show less bias than other populations that have answered these questions (like students at top universities)
- correlation with strength of LW exposure: those who have read the sequences (or have been around LW a long time, have high karma, attend meetups, make posts) score better than those who have not.
The 5 biases were selected in part because they can be tested with everyone answering the same questions; I also preferred biases that haven't been discussed in detail on LW. On some questions there is a definitive wrong answer and on others there is reason to believe that a bias will tend to lead people towards one answer (so that, even though there might be good reasons for a person to choose that answer, in the aggregate it is evidence of bias if more people choose that answer).
This is only one quick, rough survey. If the results are as predicted, that could be because LW makes people more rational, or because LW makes people more familiar with the heuristics & biases literature (including how to avoid falling for the standard tricks used to test for biases), or because the people who are attracted to LW are already unusually rational (or just unusually good at avoiding standard biases). Susceptibility to standard biases is just one angle on rationality. Etc.
Here are the question-by-question results, in brief. The next section contains the exact text of the questions, and more detailed explanations.
Question 1 was a disjunctive reasoning task, which had a definitive correct answer. Only 13% of undergraduates got the answer right in the published paper that I took it from. 46% of LWers got it right, which is much better but still a very high error rate. Accuracy was 58% for those high in LW exposure vs. 31% for those low in LW exposure. So for this question, that's:
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 2 was a temporal discounting question; in the original paper about half the subjects chose money-now (which reflects a very high discount rate). Only 8% of LWers did; that did not leave much room for differences among LWers (and there was only a weak & nonsignificant trend in the predicted direction). So for this question:
1. LWers biased: not really
2. LWers less biased than others: yes
3. Less bias with more LW exposure: n/a (or no)
Question 3 was about the law of large numbers. Only 22% got it right in Tversky & Kahneman's original paper. 84% of LWers did: 93% of those high in LW exposure, 75% of those low in LW exposure. So:
1. LWers biased: a bit
2. LWers less biased than others: yes
3. Less bias with more LW exposure: yes
Question 4 was based on the decoy effect, aka asymmetric dominance, aka the attraction effect (but missing a control condition). I don't have numbers from the original study (and there is no correct answer), so I can't really answer 1 or 2 for this question, but there was a difference based on LW exposure: 57% vs. 44% selecting the answer less associated with bias.
1. LWers biased: n/a
2. LWers less biased than others: n/a
3. Less bias with more LW exposure: yes
Question 5 was an anchoring question. The original study found an effect (measured by slope) of 0.55 (though it was less transparent about the randomness of the anchor; transparent studies with other questions have found effects around 0.3 on average). For LWers there was a significant anchoring effect, but it was only 0.14 in magnitude, and it did not vary based on LW exposure (there was a weak & nonsignificant trend in the wrong direction).
1. LWers biased: yes
2. LWers less biased than others: yes
3. Less bias with more LW exposure: no
One thing you might wonder: how much of this is just intelligence? There were several questions on the survey about performance on IQ tests or SATs. Controlling for scores on those tests, all of the results about the effects of LW exposure held up nearly as strongly. Intelligence test scores were also predictive of lower bias, independent of LW exposure, and those two relationships were almost the same in magnitude. If we extrapolate the relationship between IQ scores and the 5 biases to someone with an IQ of 100 (on either of the 2 IQ measures), they are still less biased than the participants in the original study, which suggests that the "LWers less biased than others" effect is not based solely on IQ.
MORE DETAILED RESULTS
There were 5 questions related to strength of membership in the LW community which I standardized and combined into a single composite measure of LW exposure (LW use, sequence reading, time in community, karma, meetup attendance); this was the main predictor variable I used (time per day on LW also seems related, but I found out while analyzing last year's survey that it doesn't hang together with the others or associate the same way with other variables). I analyzed the results using a continuous measure of LW exposure, but to simplify reporting, I'll give the results below by comparing those in the top third on this measure of LW exposure with those in the bottom third.
There were 5 intelligence-related measures which I combined into a single composite measure of Intelligence (SAT out of 2400, SAT out of 1600, ACT, previously-tested IQ, extra credit IQ test); I used this to control for intelligence and to compare the effects of LW exposure with the effects of Intelligence (for the latter, I did a similar split into thirds). Sample sizes: 1101 people answered at least one of the CFAR questions; 1099 of those answered at least one LW exposure question and 835 of those answered at least one of the Intelligence questions. Further details about method available on request.
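The composite scores were formed by standardizing each component and combining them. A minimal sketch of that procedure (the toy data and variable names are invented for illustration):

```python
import statistics

def standardize(values):
    """Convert raw scores into z-scores (mean 0, SD 1)."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def composite(*components):
    """Average each person's z-scores across components to form a
    single composite measure (e.g., 'LW exposure' from LW use,
    sequence reading, time in community, karma, and meetups)."""
    z = [standardize(c) for c in components]
    return [statistics.mean(person) for person in zip(*z)]

# Toy data: three respondents measured on two components.
karma = [10, 50, 90]
sequences_read = [1, 3, 5]
print(composite(karma, sequences_read))  # [-1.0, 0.0, 1.0]
```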
Here are the results, question by question.
Question 1: Jack is looking at Anne, but Anne is looking at George. Jack is married but George is not. Is a married person looking at an unmarried person?
- Yes
- No
- Cannot be determined
This is a "disjunctive reasoning" question, which means that getting the correct answer requires using "or". That is, it requires considering multiple scenarios. In this case, either Anne is married or Anne is unmarried. If Anne is married then married Anne is looking at unmarried George; if Anne is unmarried then married Jack is looking at unmarried Anne. So the correct answer is "yes". A study by Toplak & Stanovich (2002) of students at a large Canadian university found that only 13% correctly answered "yes" while 86% answered "cannot be determined" (2% answered "no").
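The case split in this explanation is easy to verify mechanically; a tiny Python sketch:

```python
def married_looking_at_unmarried(anne_married):
    """Check whether some married person is looking at an unmarried
    person, given an assumption about Anne's marital status."""
    married = {"Jack": True, "Anne": anne_married, "George": False}
    looking = [("Jack", "Anne"), ("Anne", "George")]
    return any(married[a] and not married[b] for a, b in looking)

# Whichever way Anne's status goes, the answer is "yes".
print(married_looking_at_unmarried(True), married_looking_at_unmarried(False))  # True True
```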
On this LW survey, 46% of participants correctly answered "yes"; 54% chose "cannot be determined" (and 0.4% said "no"). Further, correct answers were much more common among those high in LW exposure: 58% of those in the top third of LW exposure answered "yes", vs. only 31% of those in the bottom third. The effect remains nearly as big after controlling for Intelligence (the gap between the top third and the bottom third shrinks from 27% to 24% when Intelligence is included as a covariate). The effect of LW exposure is very close in magnitude to the effect of Intelligence; 60% of those in the top third in Intelligence answered correctly vs. 37% of those in the bottom third.
original study: 13%
weakly-tied LWers: 31%
strongly-tied LWers: 58%
Question 2: Would you prefer to receive $55 today or $75 in 60 days?
This is a temporal discounting question. Preferring $55 today implies an extremely (and, for most people, implausibly) high discount rate, is often indicative of a pattern of discounting that involves preference reversals, and is correlated with other biases. The question was used in a study by Kirby (2009) of undergraduates at Williams College (with a delay of 61 days instead of 60; I took it from a secondary source that said "60" without checking the original), and based on the graph of parameter values in that paper it looks like just under half of participants chose the larger later option of $75 in 61 days.
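To see why choosing $55 now implies an implausibly high discount rate, here is a rough calculation (continuous compounding; a sketch, not the analysis used in the paper):

```python
import math

def implied_annual_rate(now_amount, later_amount, days):
    """Annualized discount rate (as a multiplier minus 1) implied by
    indifference between now_amount today and later_amount after
    `days` days, assuming continuous compounding."""
    daily_log_rate = math.log(later_amount / now_amount) / days
    return math.exp(daily_log_rate * 365) - 1

# Indifference between $55 now and $75 in 60 days implies an annual
# discount rate of roughly 560%; preferring $55 implies an even higher one.
print(round(implied_annual_rate(55, 75, 60), 1))  # 5.6
```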
LW survey participants almost uniformly showed a low discount rate: 92% chose $75 in 61 days. This is near ceiling, which didn't leave much room for differences among LWers. For LW exposure, top third vs. bottom third was 93% vs. 90%, and this relationship was not statistically significant (p=.15); for Intelligence it was 96% vs. 91% and the relationship was statistically significant (p=.007). (EDITED: I originally described the Intelligence result as nonsignificant.)
original study: ~47%
weakly-tied LWers: 90%
strongly-tied LWers: 93%
Question 3: A certain town is served by two hospitals. In the larger hospital, about 45 babies are born each day. In the smaller one, about 15 babies are born each day. Although the overall proportion of girls is about 50%, the actual proportion at either hospital may be greater or less on any day. At the end of a year, which hospital will have the greater number of days on which more than 60% of the babies born were girls?
- The larger hospital
- The smaller hospital
- Neither - the number of these days will be about the same
This is a statistical reasoning question, which requires applying the law of large numbers. In Tversky & Kahneman's (1974) original paper, only 22% of participants correctly chose the smaller hospital; 57% said "about the same" and 22% chose the larger hospital.
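The law-of-large-numbers intuition can be checked exactly with the binomial distribution; a short sketch:

```python
from math import comb

def p_more_than_60pct_girls(n_births):
    """Exact probability that strictly more than 60% of the babies
    born on a given day are girls, assuming fair 50/50 odds."""
    k_min = n_births * 3 // 5 + 1  # smallest count strictly above 60%
    return sum(comb(n_births, k) for k in range(k_min, n_births + 1)) / 2 ** n_births

# The smaller hospital sees extreme days far more often.
print(round(p_more_than_60pct_girls(15), 3))  # ≈ 0.151
print(p_more_than_60pct_girls(45) < p_more_than_60pct_girls(15))  # True
```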
On the LW survey, 84% of people correctly chose the smaller hospital; 15% said "about the same" and only 1% chose the larger hospital. Further, this was strongly correlated with strength of LW exposure: 93% of those in the top third answered correctly vs. 75% of those in the bottom third. As with #1, controlling for Intelligence barely changed this gap (shrinking it from 18% to 16%), and the measure of Intelligence produced a similarly sized gap: 90% for the top third vs. 79% for the bottom third.
original study: 22%
weakly-tied LWers: 75%
strongly-tied LWers: 93%
Question 4: Imagine that you are a doctor, and one of your patients suffers from migraine headaches that last about 3 hours and involve intense pain, nausea, dizziness, and hyper-sensitivity to bright lights and loud noises. The patient usually needs to lie quietly in a dark room until the headache passes. This patient has a migraine headache about 100 times each year. You are considering three medications that you could prescribe for this patient. The medications have similar side effects, but differ in effectiveness and cost. The patient has a low income and must pay the cost because her insurance plan does not cover any of these medications. Which medication would you be most likely to recommend?
- Drug A: reduces the number of headaches per year from 100 to 30. It costs $350 per year.
- Drug B: reduces the number of headaches per year from 100 to 50. It costs $100 per year.
- Drug C: reduces the number of headaches per year from 100 to 60. It costs $100 per year.
This question is based on research on the decoy effect (aka "asymmetric dominance" or the "attraction effect"). Drug C is obviously worse than Drug B (it is strictly dominated by it) but it is not obviously worse than Drug A, which tends to make B look more attractive by comparison. This is normally tested by comparing responses to the three-option question with a control group that gets a two-option question (removing option C), but I cut a corner and only included the three-option question. The assumption is that more-biased people would make similar choices to unbiased people in the two-option question, and would be more likely to choose Drug B on the three-option question. The model behind that assumption is that there are various reasons for choosing Drug A and Drug B; the three-option question gives biased people one more reason to choose Drug B but other than that the reasons are the same (on average) for more-biased people and unbiased people (and for the three-option question and the two-option question).
Based on the discussion on the original survey thread, this assumption might not be correct. Cost-benefit reasoning seems to favor Drug A (and those with more LW exposure or higher intelligence might be more likely to run the numbers). Part of the problem is that I didn't update the costs for inflation - the original problem appears to be from 1995 which means that the real price difference was over 1.5 times as big then.
I don't know the results from the original study; I found this particular example online (and edited it heavily for length) with a reference to Chapman & Malik (1995), but after looking for that paper I see that it's listed on Chapman's CV as only a "published abstract".
49% of LWers chose Drug A (the one that is more likely for unbiased reasoners), vs. 50% for Drug B (which benefits from the decoy effect) and 1% for Drug C (the decoy). There was a strong effect of LW exposure: 57% of those in the top third chose Drug A vs. only 44% of those in the bottom third. Again, this gap remained nearly the same when controlling for Intelligence (shrinking from 14% to 13%), and differences in Intelligence were associated with a similarly sized effect: 59% for the top third vs. 44% for the bottom third.
original study: ??
weakly-tied LWers: 44%
strongly-tied LWers: 57%
Question 5: Get a random three digit number (000-999) from http://goo.gl/x45un and enter the number here.
Treat the three digit number that you just wrote down as a length, in feet. Is the height of the tallest redwood tree in the world more or less than the number that you wrote down?
What is your best guess about the height of the tallest redwood tree in the world (in feet)?
This is an anchoring question; if there are anchoring effects then people's responses will be positively correlated with the random number they were given (and a regression analysis can estimate the size of the effect to compare with published results, which used two groups instead of a random number).
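The slope estimate amounts to an ordinary least-squares regression of estimates on anchor values; a minimal sketch, using the published two-group Jacowitz & Kahneman (1995) figures as the example:

```python
def anchoring_slope(anchors, estimates):
    """Ordinary least-squares slope of estimates on anchor values:
    cov(anchor, estimate) / var(anchor). Slope 0 means no anchoring;
    slope 0.5 means estimates rise 50 ft. per 100 ft. of anchor."""
    n = len(anchors)
    mx = sum(anchors) / n
    my = sum(estimates) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(anchors, estimates))
    var = sum((x - mx) ** 2 for x in anchors)
    return cov / var

# Two-group case: anchors of 180 and 1200 ft. produced mean
# estimates of 282 and 844 ft.
print(round(anchoring_slope([180, 1200], [282, 844]), 2))  # 0.55
```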
Asking a question with the answer in feet was a mistake which generated a great deal of controversy and discussion. Dealing with unfamiliar units could interfere with answers in various ways so the safest approach is to look at only the US respondents; I'll also see if there are interaction effects based on country.
The question is from a paper by Jacowitz & Kahneman (1995), who provided anchors of 180 ft. and 1200 ft. to two groups and found mean estimates of 282 ft. and 844 ft., respectively. One natural way of expressing the strength of an anchoring effect is as a slope (change in estimates divided by change in anchor values), which in this case is 562/1020 = 0.55. However, that study did not explicitly lead participants through the randomization process like the LW survey did. The classic Tversky & Kahneman (1974) anchoring question did use an explicit randomization procedure (spinning a wheel of fortune; though it was actually rigged to create two groups) and found a slope of 0.36. Similarly, several studies by Ariely & colleagues (2003) which used the participant's Social Security number to explicitly randomize the anchor value found slopes averaging about 0.28.
There was a significant anchoring effect among US LWers (n=578), but it was much weaker, with a slope of only 0.14 (p=.0025). That means that getting a random number that is 100 higher led to estimates that were 14 ft. higher, on average. LW exposure did not moderate this effect (p=.88); looking at the pattern of results, if anything the anchoring effect was slightly higher among the top third (slope of 0.17) than among the bottom third (slope of 0.09). Intelligence did not moderate the results either (slope of 0.12 for both the top third and bottom third). It's not relevant to this analysis, but in case you're curious, the median estimate was 350 ft. and the actual answer is 379.3 ft. (115.6 meters).
Among non-US LWers (n=397), the anchoring effect was slightly smaller in magnitude compared with US LWers (slope of 0.08), and not significantly different from the US LWers or from zero.
original study: slope of 0.55 (0.36 and 0.28 in similar studies)
weakly-tied LWers: slope of 0.09
strongly-tied LWers: slope of 0.17
If we break the LW exposure variable down into its 5 components, every one of the five is strongly predictive of lower susceptibility to bias. We can combine the first four CFAR questions into a composite measure of unbiasedness, by taking the percentage of questions on which a person gave the "correct" answer (the answer suggestive of lower bias). Each component of LW exposure is correlated with lower bias on that measure, with r ranging from 0.18 (meetup attendance) to 0.23 (LW use), all p < .0001 (time per day on LW is uncorrelated with unbiasedness, r=0.03, p=.39). For the composite LW exposure variable the correlation is 0.28; another way to express this relationship is that people one standard deviation above average on LW exposure got 75% of CFAR questions "correct" while those one standard deviation below average got 61% "correct". Alternatively, focusing on sequence-reading, the accuracy rates were:
75% Nearly all of the Sequences (n = 302)
70% About 75% of the Sequences (n = 186)
67% About 50% of the Sequences (n = 156)
64% About 25% of the Sequences (n = 137)
64% Some, but less than 25% (n = 210)
62% Know they existed, but never looked at them (n = 19)
57% Never even knew they existed until this moment (n = 89)
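The composite unbiasedness measure described above is just a percent-"correct" score across the four scored questions; a sketch (the answer labels are invented for illustration):

```python
def unbiasedness(answers, keys):
    """Fraction of the scored CFAR questions answered in the
    less-biased direction."""
    return sum(a == k for a, k in zip(answers, keys)) / len(keys)

# Hypothetical labels for the less-biased answers to questions 1-4.
keys = ["yes", "later", "smaller", "drug_a"]
print(unbiasedness(["yes", "now", "smaller", "drug_a"], keys))  # 0.75
```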
Another way to summarize is that, on 4 of the 5 questions (all but question 4 on the decoy effect) we can make comparisons to the results of previous research, and in all 4 cases LWers were much less susceptible to the bias or reasoning error. On 1 of the 5 questions (question 2 on temporal discounting) there was a ceiling effect which made it extremely difficult to find differences within LWers; on 3 of the other 4, LWers with a strong connection to the LW community were much less susceptible to the bias or reasoning error than those with weaker ties.
REFERENCES
Ariely, Loewenstein, & Prelec (2003), "Coherent Arbitrariness: Stable demand curves without stable preferences"
Chapman & Malik (1995), "The attraction effect in prescribing decisions and consumer choice"
Jacowitz & Kahneman (1995), "Measures of Anchoring in Estimation Tasks"
Kirby (2009), "One-year temporal stability of delay-discount rates"
Toplak & Stanovich (2002), "The Domain Specificity and Generality of Disjunctive Reasoning: Searching for a Generalizable Critical Thinking Skill"
Tversky & Kahneman (1974), "Judgment under Uncertainty: Heuristics and Biases"
[Link] Singularity Summit Talks
Videos of the 2012 Singularity Summit talks are now online.
Previous discussion of the Summit here.
Take Part in CFAR Rationality Surveys
Posted By: Dan Keys, CFAR Survey Coordinator
The Center for Applied Rationality is trying to develop better methods for measuring and studying the benefits of rationality. We want to be able to test if this rationality stuff actually works.
One way that the Less Wrong community can help us with this process is by taking part in online surveys, which we can use for a variety of purposes including:
- seeing what rationality techniques people actually use in their day-to-day lives
- developing & testing measures of how rational people are, and seeing if potential rationality measures correlate with the other variables that you'd expect them to
- comparing people who attend a minicamp with others in the LW community, so that we can learn what value-added the minicamps provide beyond what you get elsewhere
- trying out some of the rationality techniques that we are trying to teach, so we can see how they work
We have a couple of surveys ready to go now which cover some of these bullet points, and will be developing other surveys over the coming months.
If you're interested in taking part in online surveys for CFAR, please go here to fill out a brief form with your contact info; then we will contact you about participating in specific surveys.
If you have previously filled out a form like this one to participate in CFAR surveys, then we already have your information so you don't need to sign up again.
Questions/Issues can be posted in the comments here, PMed to me, or emailed to us at CFARsurveys@gmail.com.
Meetup : Chicago games at Harold Washington Library (Sun 6/17)
Discussion article for the meetup : Chicago games at Harold Washington Library (Sun 6/17)
Instead of our typical Saturday Corner Bakery meetup, we'll be meeting this Sunday (6/17) from 2-4pm at the Harold Washington Library.
We've reserved a group study room on the fifth floor. Meet at the fifth floor desk at 1:55 and we will go to the room from there. (Getting up there is slightly convoluted: you'll need to take one set of escalators/elevators to the third floor, then go into the third floor and take another set of elevators/escalators to the fifth floor.)
What: various rationality games. See below for descriptions.
How (to get there): The library entrance is on State St. between Van Buren and Congress. The red, blue, green, brown, orange, and pink L lines all stop at the library. For those coming from outside the city, it's a 15-minute walk from Union Station.
Games: Zendo and the Calibration Game
rules from here: http://lesswrong.com/lw/crs/how_to_run_a_successful_less_wrong_meetup/
Zendo
Zendo, also known as “Science, the game,” involves one player picking a rule and creating structures that follow that rule. The other players try to discover the rule by building their own structures and asking whether those structures follow the rule. See Wikipedia for the exact rules.
Calibration Game
The Calibration Game requires a large number of numerical trivia questions and their answers. A couple of examples might be “how many lakes are there in Canada” or “which percentage of the world’s countries are landlocked”. The game Wits & Wagers comes with a large number of such trivia questions and answers.
There are several possible variants of the Calibration game:
Personal Calibration. One person reads the question aloud, and everyone writes down their 50% and 90% confidence intervals. For example, if you’re 50% sure that 20% - 40% of the world’s countries are landlocked, write that down as your 50% confidence interval. After ten questions, the correct answers are revealed. People can now check whether half of their guesses in the 50% confidence interval really were right, and whether they really only got one question out of ten in their 90% confidence interval wrong. Alternatively, the correct answer may be revealed as soon as everyone has made their guess, rather than waiting until all ten questions are asked.
Single-Round Aumann. Same as Personal Calibration, but after everyone has written down their initial confidence intervals, they state them aloud. People then have one chance to alter their guesses based on what the others guessed.
Multiple-Round Aumann. Same as Single-Round Aumann, but repeat the “state your guess aloud” part until nobody changes their opinions.
Aumann with Discussion. Same as Multiple-Round Aumann, but people are also allowed to discuss the reasons for their estimates instead of just stating them.
Paranoid Debating. Same as Aumann with Discussion, but one person is secretly designated as the traitor. The traitor tries to make the group’s guess as far off the mark as possible. For variants and accounts of this game, see the LW Wiki on Paranoid Debating.
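The interval-checking step of Personal Calibration can be sketched in a few lines (toy data for illustration):

```python
def calibration_hit_rate(intervals, answers):
    """Fraction of true answers that fall inside the stated intervals.
    Well-calibrated 50% intervals should land near 0.5; well-calibrated
    90% intervals near 0.9."""
    hits = [lo <= ans <= hi for (lo, hi), ans in zip(intervals, answers)]
    return sum(hits) / len(hits)

# Toy example: four 50% confidence intervals and the true answers.
intervals_50 = [(20, 40), (5, 15), (100, 300), (1, 3)]
true_answers = [25, 30, 250, 2]
print(calibration_hit_rate(intervals_50, true_answers))  # 0.75
```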
Meetup : Weekly Chicago Meetups Resume 5/26
Discussion article for the meetup : Weekly Chicago Meetups Resume 5/26
Because of all of the craziness happening in downtown Chicago, with the NATO meeting and associated protests, the 5/19 Chicago meetup is cancelled.
We will continue with our regularly scheduled weekly meetups the following Saturday, 5/26, 1pm at the Corner Bakery (Michigan & Wacker).
Join the mailing list to stay up-to-date on Chicago meetup plans.
Meetup : Weekly Chicago Meetups
Discussion article for the meetup : Weekly Chicago Meetups
The Chicago LW meetup group is now meeting every week, Saturdays at 1pm, at the Corner Bakery on the corner of Michigan & Wacker.
Topic for this week's discussion: What have you changed your mind about? Think of some examples of things you have changed your mind about (recently or long ago), and be ready to discuss them. Discussion can flow from there to topics that are raised & the process of changing one's mind.
We're trying to transition from unstructured discussions to more focused meetups. Join the mailing list and share ideas on the google doc to help plan future meetups.
[LINK] Being proven wrong is like winning the lottery
Phil Birnbaum at Sabermetric Research writes about how people have things backwards; it's great to find out that you're wrong:
Let's suppose you open a restaurant, and you're very successful, and people like your food. You're very proud of being a great chef. Then, someone tells you, correctly, that one of your appetizers, one that you think is one of your best, is actually pretty awful. Your customers hate it.
Your first reaction might be to get defensive. But, again, you should be thrilled! Now you can fix that dish. Your food, your restaurant, your profit, and your reputation will all be better than before. It's almost the best thing that can happen to you. Being wrong is like winning the lottery!
...
I guess my overall point is that any online discussion, even between people who violently disagree with each other, should be a co-operative venture. One of you is wrong, and you're working together to find out who. And, we should keep in mind that most of the benefit goes to the person who was actually wrong in the first place.
When someone you respect, or someone who seems to be expert and knowledgeable, starts disagreeing with you, it's like you've stumbled upon a fistful of lottery tickets. Argue your position, yes, but don't get defensive, and keep an open mind. Sure, it might be that other guy who's wrong. But if you're really, really lucky, it'll be you.