This is the kind of thing that makes me wonder about a community norm of taking psychological research (which may be badly designed, or prove less than it seems to) very seriously.
It's definitely a good idea to be skeptical. There is some badly designed research out there, and some that shows less than it claims to. The best way to deal with that is to read the original papers and make sure the studies were adequately performed, although even this doesn't entirely solve the issue (see: publication bias).
It would be really nice if studies had a sort of thoroughness checklist at the top of the paper, next to the abstract, clearly stating the sample size, sampling process, number of peer reviewers, study methodology (double-blind, panel, etc.), and any other information relevant to the paper's validity. If even crude standardization could occur within specific fields, it would make cross-study comparison much easier. Or what if papers could be published online in a format inviting public criticism, with authors obliged to answer community concerns?
This already happens in some cases. PLoS One, for example, publishes open-access entirely online and invites community criticism:
http://www.plosone.org/static/information.action
(Sorry, I've yet to figure out how to link things and suchlike; can HTML be used here?)
One issue with just allowing anyone to comment on a paper, though, is the high proportion of misinformed or ignorant people who can hijack the discussion. LW gets round this very well with its judicious gardening, and other sites do this too, so perhaps it's not as big an issue as I'm making it out to be. Unmoderated comment forums do tend to turn into slimepits, though.
The best way to deal with that is to read the original papers and make sure the studies were adequately performed
And even just reading the abstracts is already a huge step forward for epistemic hygiene because science reporting and journalism can be so damn shoddy (besides, I regularly find that the abstracts are easier to read and understand than their popularizations).
I generally agree. I have an aversion to just reading abstracts because it doesn't let you get at the nitty-gritty of how exactly the studies were performed, but it's way better than just reading the news reports - and not everyone has full-text access to studies anyway.
The Economist recently had an article about how sitting on wobbly furniture makes people crave "emotional stability." They also mention a study finding that people sitting in chairs that lean to the left reported more liberal opinions.
http://www.economist.com/node/21558553
The difference is not huge, but it is statistically significant. Even a small amount of environmental wobbliness seems to promote a desire for an emotional rock to cling to.
As far as I can tell they are completely serious.
Hypothesis: perhaps "money" is not a sufficient reward to trigger hyperbolic discounting. (The reason could be that you have to buy the actual rewarding thing with your money; money itself feels kind of abstract.) Most of the stuff I've read about hyperbolic discounting talks about it in the context of dopaminergic rewards.
Please do not quote entire articles; it makes LessWrong rank lower in search engines. A link and a short summary would be much better.
Is search engine rank really more important than actually having content users value?
I personally prefer seeing a few key extracts, with the interesting parts bolded, and a link if I want to read the whole thing. (I agree that just copying the whole article is not that useful, but I don't think rank on Google should weigh much in that decision.)
The opening paragraph is a poor choice; it gives no more information than the title. Here's a suggested summary:
Two studies by Daniel Read at Warwick Business School found no evidence of hyperbolic discounting. Both studies offered participants a choice between two rewards some weeks in the future, with the greater reward coming one week later than the lesser; then, after a few weeks had passed, they offered the same choice between the same rewards at the same time points. For greater realism, subjects in the first study (N = 128) believed they had a chance of getting the reward, and subjects in the second study (N = 201) believed they had either a chance or a certainty of getting it. In both studies, patience was equally likely to increase or decrease over time, whereas hyperbolic discounting predicts a decrease.
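To make that prediction concrete, here's a toy calculation (my own illustration with made-up amounts and discount rate, not numbers from the paper). Under the standard hyperbolic value function V = A / (1 + kD), a reward's value spikes as its delay D approaches zero, so a smaller-sooner reward can overtake a larger-later one at the last minute:

```python
# Toy illustration (arbitrary numbers, not the study's): hyperbolic
# discounting values a reward of amount A delayed by D weeks at
# V = A / (1 + k*D), with k an assumed discount rate.

def hyperbolic_value(amount, delay_weeks, k=0.5):
    """Present value of a delayed reward under hyperbolic discounting."""
    return amount / (1 + k * delay_weeks)

# Smaller-sooner (SS): $100 in 4 weeks; larger-later (LL): $120 in 5 weeks.
for weeks_elapsed in (0, 4):
    ss = hyperbolic_value(100, 4 - weeks_elapsed)
    ll = hyperbolic_value(120, 5 - weeks_elapsed)
    print(f"after {weeks_elapsed} weeks: SS = {ss:.1f}, LL = {ll:.1f} "
          f"-> choose {'SS' if ss > ll else 'LL'}")

# At week 0 the patient choice wins (LL: 34.3 vs SS: 33.3), but at week 4,
# with SS immediately available, SS wins (100.0 vs 80.0). That LL-to-SS
# "impatient shift" is what the studies looked for and did not find.
```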
Why not quote the abstract of the original paper? It is, after all, the authors' best attempt at accurately summarizing their results and hooking the reader:
Hyperbolic discounting of delayed rewards has been proposed as an underlying cause of the failure to stick to plans to forego one's immediate desires, such as the plan to diet, wake up early, or quit taking heroin. We conducted two tests of inconsistent planning in which respondents made at least two choices between a smaller–sooner (SS) and larger–later (LL) amount of money, one several weeks before SS would be received, and one immediately before. Hyperbolic discounting predicts that there would be more choices of SS as it became more proximate—and, equivalently, that among those who change their mind, “impatient shifts” (LL-to-SS) will be more common than “patient shifts” (SS-to-LL). We find no evidence for this, however, and in our studies shifts in both directions were equally likely. We propose that some of the evidence cited on behalf of hyperbolic discounting can be attributed to qualitatively different psychological mechanisms.
http://bps-research-digest.blogspot.com/2012/07/prepared-to-wait-new-research.html