Comment author: lahwran 14 December 2015 08:43:05PM *  0 points [-]

I'd be very interested in poking this dataset. Will the raw data be published for the dimensions analyzed here?

(If not, why do you hate science and the future of humanity? wait, drat, mind tricks only work on the weak-minded.)

Comment author: Unnamed 16 December 2015 03:15:36PM 6 points [-]

why do you hate science and the future of humanity

Because we promised to respect the participants' privacy. That includes (e.g.) not posting their income on the internet alongside other information that might be used to identify them.

Our current plan is to share the data with a few stats folks who also agree to protect participants' privacy. I've exchanged emails with Ilya about this, and we're looking for others.

Comment author: jkaufman 14 December 2015 08:11:25PM *  4 points [-]

Instead, you select from a population which is as similar as possible to the treatment group

They did this with an earlier batch (I was part of that control group) and they haven't reported that data. I found this disappointing, and it makes me trust this round of data less.

On Sunday, Sep 8, 2013 Dan at CFAR wrote:

Last year, you took part in the first round of the Center for Applied Rationality's study on the benefits of learning rationality skills. As we explained then, there are two stages to the survey process: first an initial set of surveys in summer/fall 2012 (an online Rationality Survey for you to fill out about yourself, and a Friend Survey for your friends to fill out about you), and then a followup set of surveys one year later in 2013 when you (and your friends) would complete the surveys again so that we could see what has changed.

Comment author: Unnamed 16 December 2015 03:14:01PM 5 points [-]

You're right, we should've posted the results of our previous study. I'll put those numbers together in a comprehensible format and have them posted soon.

The brief explanation of why we didn't take the time to write them up earlier is that the study was underpowered and we thought that the results weren't that informative. In retrospect, that decision was a mistake.

I've put a list of the workshop surveys that we've done in a separate comment.

Comment author: Unnamed 16 December 2015 03:12:31PM 5 points [-]

Here's a summary of all the (pre- vs. post-) workshop surveys that we've done.

With our summer 2012 workshops, we ran a small RCT. We randomized admissions for some people, and then surveyed the experimental & control groups before the workshops and again about one year later. We also had them ask their friends to fill out a peer survey about them. Additionally, they did a pre-workshop interview about intellectual topics which was coded for epistemic rationality, but because that measure had low reliability (and was labor-intensive) we did not collect post-workshop data on it and we cut that from future surveys.

We supplemented that summer 2012 experimental study with a nonrandomized treatment group (everyone else who attended a summer 2012 workshop and agreed to take the surveys) and a nonrandomized comparison group (which we recruited from Less Wrong in 2012). We should have shared these results (and the RCT results) once we had them; I'm currently working on putting that information together and will share it once it's ready.

With the October 2013 and November 2013 workshop cohorts, we collected pre-workshop data from workshop participants and their friends using version 1.1 of the workshop survey. We made substantial changes to the survey after these workshops, so we basically ended up treating these as pilot tests for the next round of surveys. We did not collect post-workshop data from the November 2013 cohort.

With the February 2014 through April 2015 workshop cohorts, we collected the data analyzed in this post (using version 2.0 of the workshop survey). We also collected data from their friends (which we plan to post about within the next few weeks, once it is all collected and analyzed).

With the June 2015 and November 2015 workshop cohorts, we collected the same pre-workshop data as with the Feb 2014 - Apr 2015 workshops. June 2015 post-workshop data is currently being collected, and we plan on sharing the updated numbers once their data has all come in.

Comment author: IlyaShpitser 12 December 2015 09:56:23PM 16 points [-]

Hi there. I want to help you with this dataset. Send me an email some time.

Comment author: Unnamed 14 December 2015 06:49:53AM 4 points [-]

We want help. Email sent.

Comment author: AstraSequi 14 December 2015 02:13:28AM *  1 point [-]

People with an interest in CFAR would probably work. That would account for possibilities like the treatment group being drawn from people interested in self-improvement, since such people could pursue self-improvement elsewhere too.

I can't say how much confidence I'd have without seeing the data. The evidence for whether it's a good control comes mainly from checking for differences between the groups at baseline. That isn't the same as checking whether the controls changed, and there's a common pitfall here: even if the treatment group changes significantly and the control group doesn't, that doesn't mean the difference between treatment and control is itself significant.
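That pitfall can be made concrete with a small simulation (all numbers below are invented): each group's change can be tested against zero separately, but the claim "treatment beat control" rests on the direct two-sample comparison of the changes.

```python
import math
import random
import statistics

def t_vs_zero(xs):
    """One-sample t statistic for the hypothesis mean(xs) == 0."""
    return statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(len(xs)))

def t_welch(xs, ys):
    """Welch two-sample t statistic for mean(xs) == mean(ys)."""
    se = math.sqrt(statistics.variance(xs) / len(xs)
                   + statistics.variance(ys) / len(ys))
    return (statistics.mean(xs) - statistics.mean(ys)) / se

random.seed(0)
# Hypothetical pre-to-post change scores for each group
treatment = [random.gauss(0.30, 1.0) for _ in range(40)]
control = [random.gauss(0.15, 1.0) for _ in range(40)]

# The treatment group may clear a significance threshold on its own
# while the control group does not...
print(t_vs_zero(treatment), t_vs_zero(control))
# ...but the relevant test is the direct comparison of the changes:
print(t_welch(treatment, control))
```

The two single-group t statistics can straddle a significance cutoff even when the two-sample statistic is nowhere near it.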

Also, to clarify, the comparison at baseline isn't limited to the outcome variables. It should include all the data on potential confounders, including things like age and gender. This is what Table 1 presents in most studies of cause and effect in populations. A few differences don't invalidate the study, but they should be accounted for in the analysis.

RE terminology: Agreed it works as a shorthand and the methodology has enough detail to tell us what was done. It just seems unusual to use it as a complete formal description.

Another question: could you explain more of what you did about potential confounders? Using age as an example, you only wrote about testing for significant correlations. This doesn't rule out age as a confounder, so did you do anything else that you didn't include?

Comment author: Unnamed 14 December 2015 06:49:10AM 0 points [-]

Could you give an example of an additional analysis that you think should be run?

If the study included a comparison group which differed on some demographic variables (like gender), then I understand the value of running analyses that control for those variables (e.g., did the treatment group have a larger increase in conscientiousness than the comparison group while controlling for gender?). But that wasn't the study design, so we can't just run a regression with demographic controls.
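For illustration only, here is the kind of regression that would be possible *if* there were a comparison group; the data, group labels, and effect sizes below are all invented, not anything from the actual study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Invented data: half the sample is a hypothetical comparison group
treated = rng.integers(0, 2, size=n)   # 1 = attended workshop
gender = rng.integers(0, 2, size=n)    # demographic control variable
# Fake pre-to-post change in conscientiousness
change = 0.4 * treated + 0.1 * gender + rng.normal(0.0, 1.0, size=n)

# OLS with a demographic control: columns = intercept, treatment, gender
X = np.column_stack([np.ones(n), treated, gender])
beta, *_ = np.linalg.lstsq(X, change, rcond=None)
print(beta[1])  # estimated treatment effect, controlling for gender
```

Without a comparison group there is no variation in `treated`, so the design matrix is rank-deficient and the treatment coefficient is unidentifiable, which is exactly the point above.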

Comment author: AstraSequi 12 December 2015 09:28:14PM *  7 points [-]

The primary weakness of longitudinal studies, compared with studies that include a control group

Longitudinal studies can and should include control groups. The difference with RCTs is that the control group is not randomized. Instead, you select from a population which is as similar as possible to the treatment group; for example, people who were interested but couldn't attend because of scheduling conflicts. There is also the option of a placebo substitute, like sending them generic self-help tips.

ETA: "Longitudinal" is also ambiguous here. It means that data were collected over time, and could mean one of several study types (RCTs are also longitudinal, by some definitions). I think you want to call this a cohort study, except without controls this is more like two different cross-sectional studies from the same population.

Comment author: Unnamed 13 December 2015 11:18:24PM 1 point [-]

We looked into the possibility of including a nonrandomized comparison group. In order to get a large enough sample size, we'd have to be much less selective than your example (people who were accepted to a workshop but weren't able to attend for several months). One option that we considered was surveying Less Wrongers. Another option was to ask for volunteers from the people who had shown an interest in CFAR (e.g., people who have subscribed to the CFAR newsletter, people who have applied to workshops and been turned down). We decided not to use either of those comparison groups in this study, but we might use them in future research.

Would you have much more confidence in these results if we had included one of those groups as a comparison, and found that they showed little or no change on these variables?

(RE terminology: studies with this design are often just called "longitudinal." Hopefully the methodology section clears up any ambiguity, and the opening of the post also points readers' thoughts in the right direction.)

Comment author: Kaj_Sotala 12 December 2015 06:02:25PM 2 points [-]

Neat!

You only seem to mention self-reports. What about the part of the pre- and post-surveys where you had the workshop participants' friends rate them?

Comment author: Unnamed 12 December 2015 09:31:51PM 2 points [-]

We are still finishing up data collection on those (we don't ask the friend to fill out the post-survey until after the participant has filled it out, which means that it takes a few extra weeks to get all the friend data). I'll start the data analysis on those within the next week or so.

In response to comment by [deleted] on Rationality Quotes Thread December 2015
Comment author: Sarunas 05 December 2015 12:37:08PM 1 point [-]

I remember reading the idea expressed in this quote in an old LW post, older than Haidt's book which was published in 2012, and it is probably older than that.

In any case, I think that this is a very good quote, because it highlights a bias that seems to be more prevalent than perhaps any other cognitive bias discussed here, and it motivates attempts to find better ways to reason and argue. If LessWrong had an introduction intended to motivate the need for better thinking tools, this idea could be presented very early, maybe even in the second or third paragraph.

Comment author: Unnamed 05 December 2015 08:28:33PM 9 points [-]

I think psychologist Tom Gilovich is the original source of the "Can I?" vs. "Must I?" description of motivated reasoning. He wrote about it in his 1991 book How We Know What Isn't So.

For desired conclusions, we ask ourselves, "Can I believe this?", but for unpalatable conclusions we ask, "Must I believe this?"

Comment author: Starglow 12 November 2015 05:47:04AM *  0 points [-]

Hi! This may seem a bit off topic, but I would really appreciate it if someone could answer my question. A few months ago, I found and played a nifty little game that asked you to make guesses about statistics and set confidence intervals, was mostly about updating probabilities based on new information, and ultimately required you to collect information to decide whether a certain savant (a philosopher or mathematician, I don't remember) was more likely to be in his cave or at the pub. I've been wanting to have another look at it, but I have been entirely unable to find it again.

Could anyone point me to it? I'm fairly certain it was somewhere around here. Thanks for the help!

Comment author: Unnamed 12 November 2015 06:24:44AM 2 points [-]
Comment author: philh 13 October 2015 08:05:50PM 12 points [-]

I have an intuition that if we implemented universal basic income, the prices of necessities would rise to the point where people without other sources of income would still be in poverty. I assume there are UBI supporters who've spent more time thinking about that question than I have, and I'm interested in their responses.

(I have some thoughts myself on the general directions responses might take, but I haven't fleshed them out, and I might not care enough to do so.)

Comment author: Unnamed 16 October 2015 07:59:50PM 7 points [-]

If you want information on how increased income due to UBI would affect people's spending on food, you can look at the data that we already have on the relationship between income and spending on food. Three stylized facts:

As income goes up, the proportion of income spent on food goes down.

As income goes up, the total amount of money spent on food goes up.

As income goes up, the proportion of one's food budget spent on restaurants goes up.

These trends generally hold if you are comparing different countries with each other, or if you are comparing different people within a single country, or if you are looking at a single country over time as it gets richer. I don't see any strong reasons to think that they wouldn't also apply to people whose income went up due to receiving a new UBI.

So if a household was making $20,000 per year and spending 20% of it ($4,000) on food, and UBI increases their income to $25,000 per year, then we can predict that they will spend somewhere between $4,000 and $5,000 per year on food, and some of the increased spending will go towards increased quality & convenience (such as eating out). You could probably make more precise predictions if you tried to put numbers on the three stylized facts.
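The bounds in that example follow directly from the first two stylized facts; a quick sketch of the arithmetic, using the same hypothetical numbers as above:

```python
# Hypothetical household from the paragraph above
income_before = 20_000
food_share_before = 0.20
food_before = food_share_before * income_before  # $4,000
income_after = 25_000  # after a $5,000 UBI

# Stylized fact 2: total food spending rises with income
lower_bound = food_before
# Stylized fact 1: the share of income spent on food falls,
# so spending stays below the old share applied to the new income
upper_bound = food_share_before * income_after

print(lower_bound, upper_bound)  # 4000.0 5000.0
```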

More generally, the model here is: UBI affects the distribution of 'income after taxes & transfers', and the distribution of 'income after taxes & transfers' affects other things like prices & spending habits. So if you want to predict how UBI will affect something like prices, then study how 'income after taxes & transfers' affects prices, and combine that with your estimate of how the UBI will affect the distribution of 'income after taxes & transfers'.
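As a toy sketch of that two-step model (the transfer size and the income-to-food-share curve below are both made up for illustration):

```python
UBI = 5_000  # hypothetical annual transfer

def income_after_transfers(pre_transfer_income):
    # Step 1: UBI shifts the post-transfer income distribution
    # (ignoring the taxes that would fund it)
    return pre_transfer_income + UBI

def food_share(income):
    # Step 2: a stand-in Engel-style curve in which the share of
    # income spent on food falls as income rises
    return min(0.5, 8_000 / income)

for income in (10_000, 20_000, 40_000):
    post = income_after_transfers(income)
    print(income, post, round(food_share(post), 3))
```

Any real prediction would replace `food_share` with a relationship estimated from the cross-country or household data described above; the structure of the calculation is the point here, not the numbers.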

View more: Prev | Next