gwern comments on [LINK] Get paid to train your rationality - Less Wrong
Apparently the only way to know is to try. It seems likely that there is such a restriction. I'd estimate a better than 70% chance that I get turned down. :)
I got an email from the study an hour ago saying I was accepted and pointing me to the initial survey (a long one, covering calibration on geopolitics, finance, and religion; personality surveys with a lot of fox/hedgehog questions; basic probability; a critical-thinking test, the CRT; and then what looked like a full matrix IQ test). The message at the end of all the questions:
So I'm marking myself as accepted, anyway.
And the "tournament" has now begun. Just got an email with login instructions.
Looks somewhat similar to PredictionBook, actually. :)
I did all my predictions last night immediately after the email showed up, so that meant I got to place a lot of bets at 50/50 odds :)
(Then I recorded everything privately in PredictionBook. No point in leaving my predictions trapped on their site.)
Interface-wise, I don't like it at all. I'm still not sure what exactly I'm betting with, or at what odds, compared to PB's straight probabilities or Intrade's share prices.
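To make the contrast concrete, here's a toy sketch of the two representations: PB records a bare probability, while an Intrade-style contract's share price implies one (Intrade contracts pay $10, i.e. 100 points, if the event occurs, so price divided by 100 approximates the market's probability). The function names below are just for illustration, not anything from either site:

```python
# Toy illustration: a PB-style probability vs. an Intrade-style share
# price. Intrade contracts pay out $10 (100 points) if the event
# occurs, so the trading price in points approximates the market's
# probability estimate. Function names are illustrative only.
def intrade_price_to_probability(price_points):
    return price_points / 100.0

def probability_to_intrade_price(probability):
    return probability * 100.0

print(intrade_price_to_probability(62))   # 0.62
print(probability_to_intrade_price(0.5))  # 50.0
```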
Did you take the "training refresher"? That includes a general-knowledge test at the end which scores you on both calibration and resolution. My results were pretty poor (but not abysmal):
I'd be curious to compare with yours if you'd care to share.
Without actually going back through the whole refresher, it seems to be the same as the training I did; I don't remember any calibration/resolution test from it. Perhaps that is one of the experimental differences.
I didn't remember that test from earlier, either. Worth checking out? I don't mind accidentally unblinding a little if it is an experimental/control difference - curious folks will be curious.
I just went through the whole thing again; there was no test of that kind at the end. (What it did have was the previous multiple-choice quiz about some example forecasts and how they went wrong.) Looks like this is an experimental/control difference. I'd rather not discuss that bit further - this isn't about possibly life-or-death drugs, after all, and I already know where I can find calibration tests like that.
Fine with me. :)
BTW, look what I found. Did you know about this one?
Looks like someone is being very naughty. I've asked him on Twitter.
Have you entered any comments on your predictions at the GJ site? (You're supposed to enter a minimum number of comments over one year, and also a minimum number of responses to others' comments. My understanding is that this will in time be run as a team game, with team play conventions.)
From my first experiences, I'm assuming the scoring will be pretty much as with PB.com - based on the probabilities you assign. Their model seems to be calibration/resolution rather than PB's visual "slope" representation.
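For what it's worth, a calibration/resolution split like that corresponds to the standard Murphy decomposition of the Brier score. A minimal sketch, assuming simple 10%-wide forecast bins (the binning and the names are my own, not their actual scoring code):

```python
# Murphy decomposition of the Brier score into reliability
# (calibration), resolution, and uncertainty terms. The 10%-wide
# binning scheme here is an assumption for illustration.
from collections import defaultdict

def murphy_decomposition(forecasts, outcomes):
    """forecasts: probabilities in [0, 1]; outcomes: 0/1 resolutions."""
    n = len(forecasts)
    base_rate = sum(outcomes) / n
    bins = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        bins[round(p, 1)].append(o)  # group forecasts into 10% bins
    # Reliability: how far each bin's stated probability sits from
    # its observed frequency (lower is better calibrated).
    reliability = sum(len(os) * (p - sum(os) / len(os)) ** 2
                      for p, os in bins.items()) / n
    # Resolution: how far the bins' observed frequencies spread out
    # from the overall base rate (higher is better).
    resolution = sum(len(os) * (sum(os) / len(os) - base_rate) ** 2
                     for os in bins.values()) / n
    uncertainty = base_rate * (1 - base_rate)
    brier = reliability - resolution + uncertainty
    return brier, reliability, resolution

# Example: four 50/50 guesses, three of which came true.
print(murphy_decomposition([0.5] * 4, [1, 0, 1, 1]))
# -> (0.25, 0.0625, 0.0)
```

A batch of 50/50 answers, like my opening bets, has zero resolution by construction: it never separates the events that happen from the ones that don't.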
Comments? Checking right now, I don't see any relevant fields for that, nor does my 'About' include the substring "comment". Another experimental difference, I guess...
The "Why did you answer the way you did" field. I've been assuming we're both using the same underlying app, i.e. Crowdcast. But perhaps we're not...