ikajaste comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM


Comment author: MathieuRoy 22 January 2014 06:22:11PM *  2 points [-]

P(Aliens in observable universe): 74.3 ± 32.7 (60, 90, 99) [n = 1496]
P(Aliens in Milky Way): 44.9 ± 38.2 (5, 40, 85) [n = 1482]

There are (very probably) around 1.7x10^11 galaxies in the observable universe, so I don't understand how P(Aliens in Milky Way) can be so close to P(Aliens in observable universe). If P(Aliens in an average galaxy) = 0.0000000001, then P(Aliens in observable universe) should be around 1-(1-0.0000000001)^(1.7x10^11) = 0.9999999586. I know there are other factors that influence these numbers, but still: even if P(Aliens in Milky Way) is only very slight, P(Aliens in observable universe) should be almost certain. There are possible rational justifications for the survey results, but I think (0.95) most people were victims of a cognitive bias. Scope insensitivity, maybe, because 1.7x10^11 galaxies is too big to imagine? What do you think?
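The arithmetic above can be checked directly (the per-galaxy probability is the commenter's illustrative figure, not a measured value):

```python
# Sketch of the calculation: if each galaxy independently hosts aliens
# with tiny probability p, the chance that at least one of the ~1.7e11
# galaxies in the observable universe does is nearly 1.
p_per_galaxy = 1e-10   # illustrative per-galaxy probability from the comment
n_galaxies = 1.7e11    # estimated galaxies in the observable universe

p_observable = 1 - (1 - p_per_galaxy) ** n_galaxies
print(p_observable)    # ~0.9999999586
```

Since n*p = 17 here, this is essentially 1 - e^-17, which is why even a minuscule per-galaxy probability makes aliens somewhere in the observable universe a near-certainty.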

Tendency to cooperate on the prisoner's dilemma was most highly correlated with items in the general leftist political cluster.

I wonder how many people cooperated only (or in part) because they knew the results would be correlated with their (political) views, and they wanted their "tribe"/community/group/etc. to look good. Maybe next year we could announce that this result won't be correlated with the others? Then if fewer people cooperate, it would suggest that some people cooperate to make their 'group' look good. But if those people know that we want to compare next year's results with this year's in order to test this hypothesis, they will continue to cooperate. To avoid most of this, we should compare only the people who fill out the survey for the first time next year. What do you think?

I ended up deleting 40 answers that suggested there were less than ten million or more than eight billion Europeans, on the grounds that people probably weren't really that far off so it was probably some kind of data entry error, and correcting everyone who entered a reasonable answer in individuals to answer in millions as the question asked.

I think you shouldn't have corrected anything. When I assign a probability to the correctness of my answer, I include a percentage for having misread the question or made a data entry error.

This year's results suggest that was no fluke and that we haven't even learned to overcome the one bias that we can measure super-well and which is most easily trained away. Disappointment!

Would some people be interested in answering 10 such questions, giving their confidence in each answer, every month? That would provide better statistics and a way to see whether we're improving.
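One standard way to score such monthly calibration questions is the Brier score (a sketch; the example confidences and outcomes are made up):

```python
# Brier score: mean squared error between stated confidences and 0/1
# outcomes. Lower is better; a downward trend across months would
# suggest improving calibration.
def brier_score(forecasts, outcomes):
    """Mean of (confidence - outcome)^2 over all questions."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# e.g. 10 questions in one month: confidence in each answer, and
# whether the answer turned out correct (1) or not (0)
confidences = [0.9, 0.7, 0.6, 0.95, 0.5, 0.8, 0.99, 0.4, 0.7, 0.85]
correct     = [1,   1,   0,   1,    0,   1,   1,    0,   1,   1]
print(brier_score(confidences, correct))
```

A perfectly calibrated and perfectly knowledgeable answerer scores 0; always answering 0.5 scores 0.25 regardless of outcomes.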

Comment author: ikajaste 27 January 2014 08:44:58AM 1 point [-]

I wonder how many people cooperated only (or in part) because they knew the results would be correlated with their (political) views, and they wanted their "tribe"/community/group/etc. to look good.

I don't think the responses of people here would be much affected by a direct desire to present their own social group as good. However, a (spurious) correlation between those two could arise just from framing by the other questions.

E.g. the answer to the prisoner's dilemma question might be affected by whether you've just answered "I'm associated with the political left" or "I consider rational calculation to be the best way to solve issues".

If that is the effect causing a spurious correlation, then adding the statement "these won't be correlated" wouldn't do any good - in fact, it would only serve to further activate the political-association frame.

This is a common problem with surveys, and it isn't easy to mitigate. Randomizing the question order for each respondent and analyzing how correlations differ by presented order helps a bit, but the problem remains, and the sample available for any such difference-in-correlation analysis becomes quite small.
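The mitigation described above can be sketched as follows (a toy illustration; the question names are hypothetical and no real survey data is used):

```python
import random

# Per-respondent question-order randomization: shuffle the order
# independently for each respondent and record it with the answers.
# Afterwards, split respondents by which question came first and
# compare correlations across the subsamples.
questions = ["politics", "prisoners_dilemma"]

def administer(respondent_id, rng):
    order = questions[:]
    rng.shuffle(order)        # each respondent gets an independent order
    return {"id": respondent_id, "order": order}

rng = random.Random(0)        # seeded for reproducibility
responses = [administer(i, rng) for i in range(1000)]

saw_politics_first = [r for r in responses if r["order"][0] == "politics"]
saw_pd_first = [r for r in responses if r["order"][0] == "prisoners_dilemma"]

# If framing drives the correlation, it should differ between these two
# groups -- but note each subsample is only about half the data, which
# is the shrinking-sample-size problem mentioned above.
print(len(saw_politics_first), len(saw_pd_first))
```

This illustrates the trade-off: the design lets you test for order effects, but every such split halves the data available for each comparison.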