Comment author: lahwran 14 December 2015 08:43:05PM *  0 points [-]

I'd be very interested in poking this dataset. Will the raw data be published for the dimensions analyzed here?

(If not, why do you hate science and the future of humanity? wait, drat, mind tricks only work on the weak-minded.)

Comment author: Unnamed 16 December 2015 03:15:36PM 6 points [-]

why do you hate science and the future of humanity

Because we promised to respect the participants' privacy. That includes (e.g.) not posting their income on the internet alongside other information that might be used to identify them.

Our current plan is to share the data with a few stats folks who also agree to protect their privacy. I've exchanged emails with Ilya about this, and we're looking for others.

Comment author: jkaufman 14 December 2015 08:11:25PM *  4 points [-]

Instead, you select from a population which is as similar as possible to the treatment group

They did this with an earlier batch (I was part of that control group) and they haven't reported that data. I found this disappointing, and it makes me trust this round of data less.

On Sunday, Sep 8, 2013 Dan at CFAR wrote:

Last year, you took part in the first round of the Center for Applied Rationality's study on the benefits of learning rationality skills. As we explained then, there are two stages to the survey process: first an initial set of surveys in summer/fall 2012 (an online Rationality Survey for you to fill out about yourself, and a Friend Survey for your friends to fill out about you), and then a followup set of surveys one year later in 2013 when you (and your friends) would complete the surveys again so that we could see what has changed.

Comment author: Unnamed 16 December 2015 03:14:01PM 5 points [-]

You're right, we should've posted the results of our previous study. I'll put those numbers together in a comprehensible format and post them soon.

The brief explanation of why we didn't take the time to write them up earlier is that the study was underpowered and we thought that the results weren't that informative. In retrospect, that decision was a mistake.

I've put a list of the workshop surveys that we've done in a separate comment.

Comment author: Unnamed 16 December 2015 03:12:31PM 5 points [-]

Here's a summary of all the (pre- vs. post-) workshop surveys that we've done.

With our summer 2012 workshops, we ran a small RCT. We randomized admissions for some people, and then surveyed the experimental & control groups before the workshops and again about one year later. We also had them ask their friends to fill out a peer survey about them. Additionally, they did a pre-workshop interview about intellectual topics which was coded for epistemic rationality, but because that measure had low reliability (and was labor-intensive) we did not collect post-workshop data on it and we cut that from future surveys.

We supplemented that summer 2012 experimental study with a nonrandomized treatment group (everyone else who attended a summer 2012 workshop and agreed to take the surveys) and a nonrandomized comparison group (which we recruited from Less Wrong in 2012). We should have shared these results (and the RCT results) once we had them; I'm currently working on putting that information together and will share it once it's ready.

With the October 2013 and November 2013 workshop cohorts, we collected pre-workshop data from workshop participants and their friends using version 1.1 of the workshop survey. We made substantial changes to the survey after these workshops, so we basically ended up treating these as pilot tests for the next round of surveys. We did not collect post-workshop data from the November 2013 cohort.

With the February 2014 through April 2015 workshop cohorts, we collected the data analyzed in this post (using version 2.0 of the workshop survey). We also collected data from their friends (which we plan to post about within the next few weeks, once it is all collected and analyzed).

With the June 2015 and November 2015 workshop cohorts, we collected the same pre-workshop data as with the Feb 2014 - Apr 2015 workshops. June 2015 post-workshop data is currently being collected, and we plan on sharing the updated numbers once their data has all come in.

Comment author: IlyaShpitser 12 December 2015 09:56:23PM 16 points [-]

Hi there. I want to help you with this dataset. Send me an email some time.

Comment author: Unnamed 14 December 2015 06:49:53AM 4 points [-]

We want help. Email sent.

Comment author: AstraSequi 14 December 2015 02:13:28AM *  1 point [-]

People with an interest in CFAR would probably work. It would account for possibilities like the treatment group being drawn from people interested in self-improvement, since such people could pursue that in other places.

I can't say how much confidence I'd have without seeing the data. The evidence for whether it's a good control mainly comes from checking for differences between the groups at baseline. Note that this is not the same as checking whether the control group changed over time, and there's a common pitfall here: even if the treatment group changes significantly and the control group doesn't, it doesn't follow that the difference between treatment and control is significant.
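That pitfall can be made concrete with a toy calculation (hypothetical numbers, not from this study): each group's pre-to-post change is tested on its own, and then the two changes are compared directly, with the standard errors combining in quadrature.

```python
# Illustration of the "difference in significance" pitfall.
# Hypothetical numbers: each group's change is given as estimate +/- SE.
import math

def two_sided_p(z):
    """Two-sided p-value for a z statistic under the normal approximation."""
    return math.erfc(abs(z) / math.sqrt(2))

treat_change, treat_se = 5.0, 2.0   # treatment group's pre-to-post change
ctrl_change, ctrl_se = 2.0, 2.0     # control group's pre-to-post change

p_treat = two_sided_p(treat_change / treat_se)   # ~0.012: significant on its own
p_ctrl = two_sided_p(ctrl_change / ctrl_se)      # ~0.317: not significant on its own

# The correct test compares the two changes directly.
diff = treat_change - ctrl_change
diff_se = math.sqrt(treat_se**2 + ctrl_se**2)
p_diff = two_sided_p(diff / diff_se)             # ~0.289: not significant

print(p_treat, p_ctrl, p_diff)
```

So "treatment changed significantly, control didn't" coexists here with a non-significant treatment-vs-control comparison.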

Also, to clarify, the comparison at baseline isn't limited to the outcome variables. It should include all the data on potential confounders, including things like age and gender; most epidemiological studies of cause and effect present this in Table 1. A few differences don't invalidate the study, but they should be accounted for in the analysis.

RE terminology: Agreed it works as a shorthand and the methodology has enough detail to tell us what was done. It just seems unusual to use it as a complete formal description.

Another question: could you explain more of what you did about potential confounders? Using age as an example, you only wrote about testing for significant correlations. This doesn't rule out age as a confounder, so did you do anything else that you didn't include?

Comment author: Unnamed 14 December 2015 06:49:10AM 0 points [-]

Could you give an example of an additional analysis that you think should be run?

If the study included a comparison group which differed on some demographic variables (like gender), then I understand the value of running analyses that control for those variables (e.g., did the treatment group have a larger increase in conscientiousness than the comparison group while controlling for gender?). But that wasn't the study design, so we can't just run a regression with demographic controls.
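For concreteness, here is a sketch of the kind of analysis being referred to, on purely synthetic data (it does not apply to this study's design, as noted above): a change-score regression with a treatment indicator and a demographic covariate, so the treatment coefficient is adjusted for gender.

```python
# Hypothetical sketch, not the study's analysis: regression of a change score
# on a treatment indicator while controlling for a demographic covariate.
# All data here are synthetic; the effect sizes are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 400
treated = rng.integers(0, 2, n)   # 1 = attended workshop, 0 = comparison group
gender = rng.integers(0, 2, n)    # binary covariate for illustration
# Simulated change scores: treatment effect 0.5, gender effect 0.3, unit noise.
change = 0.5 * treated + 0.3 * gender + rng.normal(0, 1, n)

# Design matrix: intercept, treatment indicator, gender covariate.
X = np.column_stack([np.ones(n), treated, gender])
coef, *_ = np.linalg.lstsq(X, change, rcond=None)
print(coef)  # coef[1] estimates the treatment effect adjusted for gender
```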

Comment author: AstraSequi 12 December 2015 09:28:14PM *  7 points [-]

The primary weakness of longitudinal studies, compared with studies that include a control group

Longitudinal studies can and should include control groups. The difference from RCTs is that the control group is not randomized. Instead, you select from a population that is as similar as possible to the treatment group; for example, people who were interested but couldn't attend because of scheduling conflicts. There is also the option of a placebo substitute, like sending them generic self-help tips.

ETA: "Longitudinal" is also ambiguous here. It means that data were collected over time, and could mean one of several study types (RCTs are also longitudinal, by some definitions). I think you want to call this a cohort study, except without controls this is more like two different cross-sectional studies from the same population.

Comment author: Unnamed 13 December 2015 11:18:24PM 1 point [-]

We looked into the possibility of including a nonrandomized comparison group. In order to get a large enough sample size, we'd have to be much less selective than your example (people who were accepted to a workshop but weren't able to attend for several months). One option that we considered was surveying Less Wrongers. Another option was to ask for volunteers from the people who had shown an interest in CFAR (e.g., people who have subscribed to the CFAR newsletter, people who have applied to workshops and been turned down). We decided not to use either of those comparison groups in this study, but we might use them in future research.

Would you have much more confidence in these results if we had included one of those groups as a comparison, and found that they showed little or no change on these variables?

(RE terminology: studies with this design are often just called "longitudinal." Hopefully the methodology section clears up any ambiguity, and the opening of the post also points readers' thoughts in the right direction.)

Comment author: Kaj_Sotala 12 December 2015 06:02:25PM 2 points [-]

Neat!

You only seem to mention self-reports. What about the part of the pre- and postsurveys where you had the workshop participant's friends rate them?

Comment author: Unnamed 12 December 2015 09:31:51PM 2 points [-]

We are still finishing up data collection on those (we don't ask the friend to fill out the post-survey until after the participant has filled it out, which means that it takes a few extra weeks to get all the friend data). I'll start the data analysis on those within the next week or so.

Results of a One-Year Longitudinal Study of CFAR Alumni

33 Unnamed 12 December 2015 04:39AM

By Dan from CFAR

Introduction

When someone comes to a CFAR workshop, and then goes back home, what is different for them one year later? What changes are there to their life, to how they think, to how they act?

CFAR would like to have an answer to this question (as would many other people). One method that we have been using to gather relevant data is a longitudinal study, comparing participants' survey responses from shortly before their workshop with their survey responses approximately one year later. This post summarizes what we have learned thus far, based on data from 135 people who attended workshops from February 2014 to April 2015 and completed both surveys.

The survey questions can be loosely categorized into four broad areas:

  1. Well-being: On the whole, is the participant's life going better than it was before the workshop?
  2. Personality: Have there been changes on personality dimensions which seem likely to be associated with increased rationality?
  3. Behaviors: Have there been increases in rationality-related skills, habits, or other behavioral tendencies?
  4. Productivity: Is the participant working more effectively at their job or other projects?

We chose to measure these four areas because they represent part of what CFAR hopes that its workshops accomplish, they are areas where many workshop participants would like to see changes, and they are relatively tractable to measure on a survey. There are other areas where CFAR would like to have an effect, including people's epistemics and their impact on the world, which were not a focus of this study.

We relied heavily on existing measures which have been validated and used by psychology researchers, especially in the areas of well-being and personality. These measures typically are not a perfect match for what we care about, but we expected them to be sufficiently correlated with what we care about for them to be worth using.

We found significant increases in variables in all 4 areas. A partial summary:

Well-being: increases in happiness and life satisfaction, especially in the work domain (but no significant change in life satisfaction in the social domain)

Personality: increases in general self-efficacy, emotional stability, conscientiousness, and extraversion (but no significant change in growth mindset or openness to experience)

Behaviors: increased rate of acquisition of useful techniques, emotions experienced as more helpful & less of a hindrance (but no significant change on measures of cognitive biases or useful conversations)

Productivity: increases in motivation while working and effective approaches to pursuing projects (but no significant change in income or number of hours worked)

The rest of this post is organized into three main sections. The first section describes our methodology in more detail, including the reasoning behind the longitudinal design and some information on the sample. The second section gives the results of the research, including the variables that showed an effect and the ones that did not; the results are summarized in a table at the end of that section. The third section discusses four major methodological concerns—the use of self-report measures (where respondents might just give the answer that sounds good), attrition (some people who took the pre-survey did not complete the post-survey), other sources of personal growth (people might have improved over time without attending the CFAR workshop), and regression to the mean (people may have changed after the workshop simply because they came to the workshop at an unusually high or low point)—and attempts to evaluate the extent to which these four issues may have influenced the results.
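The last of those concerns, regression to the mean, can be illustrated with a toy simulation (synthetic data, not from the survey): if each observed score is a stable underlying level plus transient noise, then people measured at an unusually low point will tend to score higher at follow-up even with no intervention at all.

```python
# Toy simulation of regression to the mean (synthetic data, not CFAR data).
import random

random.seed(0)
N = 100_000
true_levels = [random.gauss(0, 1) for _ in range(N)]
pre = [t + random.gauss(0, 1) for t in true_levels]   # pre-survey measurement
post = [t + random.gauss(0, 1) for t in true_levels]  # one year later, no treatment

# Select the quarter of people whose pre-survey scores were lowest.
cutoff = sorted(pre)[N // 4]
selected = [i for i in range(N) if pre[i] <= cutoff]

mean_pre = sum(pre[i] for i in selected) / len(selected)
mean_post = sum(post[i] for i in selected) / len(selected)
print(mean_pre, mean_post)  # post mean is noticeably higher than pre mean
```

The selected group improves substantially between the two measurements purely because the transient noise that made their pre-survey scores low does not recur.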

In response to comment by [deleted] on Rationality Quotes Thread December 2015
Comment author: Sarunas 05 December 2015 12:37:08PM 1 point [-]

I remember reading the idea expressed in this quote in an old LW post, older than Haidt's book which was published in 2012, and it is probably older than that.

In any case, I think that this is a very good quote, because it highlights a bias that seems to be more prevalent than perhaps any other cognitive bias discussed here and motivates attempts to find better ways to reason and argue. If LessWrong had an introduction whose intention was to motivate why we need better thinking tools, this idea could be presented very early, maybe even in a second or third paragraph.

Comment author: Unnamed 05 December 2015 08:28:33PM 9 points [-]

I think psychologist Tom Gilovich is the original source of the "Can I?" vs. "Must I?" description of motivated reasoning. He wrote about it in his 1991 book How We Know What Isn't So.

For desired conclusions, we ask ourselves, "Can I believe this?", but for unpalatable conclusions we ask, "Must I believe this?"

Comment author: Starglow 12 November 2015 05:47:04AM *  0 points [-]

Hi! This may seem a bit off topic, but I would really appreciate it if someone could answer my question. A few months ago, I found and played a nifty little game that asked you to make guesses about statistics and set confidence intervals, was mostly about updating probabilities based on new information, and ultimately required you to collect information to decide whether a certain savant (a philosopher or mathematician, I don't remember) was more likely to be in his cave or at the pub. I've been wanting to have another look at it, but I have been entirely unable to find it again.

Could anyone point me to it? I'm fairly certain it was somewhere around here. Thanks for the help!

Comment author: Unnamed 12 November 2015 06:24:44AM 2 points [-]
