Is it just me, or are the betting rules on Bets of Bitcoin, er, incomprehensible? It takes the worst form of betting - parimutuel betting in horse racing - in common use in America, and adds a bunch of arbitrary time-based rules to adjust how bitcoins are spread among the winners. In order to actually make a bet there, I would have to estimate:
I'm fairly certain this is the worst betting scheme I've ever seen, and I'm somewhat suspicious of it given the general sketchiness of the bitcoin community (partially offset by the revealed incompetence of the bitcoin community). One notable casualty of this system is the ability to convert their betting information into informational probability estimates of the event occurring.
I'm not competent to comment on the 'revealed incompetence of the Bitcoin community', but for the benefit of those who aren't aware of those issues, it would be useful if you could either summarize that revelation or post a link to such a summary.
https://bitcointalk.org/index.php?topic=83794.0;all is a reasonably good start at listing the various scams and frauds and hacks.
See also http://polimedia.us/trilema/2012/the-bitcoin-drama-timeline/
No, it's not just you - I was really excited when I heard of it since I had been forced to stop using Intrade due to the fees (and now Intrade is off-limits period), but as I read through the FAQ and the rules, my reaction was, as I started: ლ(╹◡╹ლ) As the rules began to dawn: ⊙o⊙ And finally: ╰(‵皿′*)╯︵ ┻━┻
But of course it's still a source of predictions even if you'd have to be insane to use it.
Why do you say that parimutuel betting is the worst form? What is the dimension along which it is bad?
Consider that a rhetorical flourish rather than an academic fact.
In blackjack, sports betting, roulette, and slots, you only face uncertainty in the outcome of the event. In parimutuel betting you face uncertainty in the outcome of the event, and also due to the movement of the odds based on the opinions of others after your bet is placed. Lotteries also exhibit this since a greater number of lottery players increases the size of the lottery, but also increases the probability that multiple people will select the same set of numbers. Poker probably has some characteristics of this too, but trying to force an interactive, strategic game with random elements into this outcome betting paradigm probably stretches the comparison to the point of breaking.
The dimension it's bad along is "fun for players who want to be able to make bets based on their opinions of the outcomes of events." Of course, blackjack, roulette, and slots may not be very fun for this player either, but sports betting is probably more fun than horse racing.
Well, in principle I see what you are talking about - if a lot of people bet on your side, then your payoff can be reduced, perhaps to the point where you wouldn't have taken the bet. But, at least for reasonably well-subscribed bets, won't you, in the long run, do well enough by equating the ratio of future bets to the ratio of current bets? In other words, the odds should be as likely to move in your favour as against you.
That's basically true for horse racing because all the relevant information is known in the minutes before the race and you can make your final bet. For a prediction market you would actually expect that if you have an informational advantage, the odds will move closer to your estimate over time as that information is revealed or processed by other market participants. The time frame on Bets of Bitcoin is long enough that genuine information will be revealed in the meantime for many bets. Because later bets aren't "worth" as much as earlier bets, though, it's impossible to say how much this would affect the actual payoffs without having a specific example to consider.
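To make the odds-movement point concrete, here's a toy sketch of pool (parimutuel) betting. Everything here is invented for illustration - the 15% house take, the pool sizes, and the function name are assumptions, not Bets of Bitcoin's actual rules (which additionally weight bets by time):

```python
def parimutuel_payout(stake, side_pool, total_pool, take=0.15):
    """Payout to a winning bettor under simple pool betting:
    the total pool (minus the house take) is split among winners
    in proportion to their stakes."""
    return (total_pool * (1 - take)) * stake / side_pool

# You bet 10 on outcome A when the pools are A=100, B=300.
early = parimutuel_payout(10, side_pool=100, total_pool=400)
# Later bettors add 200 more to A. Your stake is unchanged, but your
# share of the winning pool shrinks, so your payout drops.
late = parimutuel_payout(10, side_pool=300, total_pool=600)
print(early, late)  # 34.0 17.0
```

This is the extra source of uncertainty: the outcome could be exactly what you predicted, yet your return still depends on how everyone else bets after you.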
Just tried this out and it looks promising: Google News search for "expected to" or "unlikely to".
I'd like to become better calibrated via PredictionBook and other tools, but coming up with well-specified predictions can be very time-consuming.
I don't think that's true. I think it's more that people don't like making predictions and seek excuses for their inability. Take predictionbook. I made a bunch of predictions about my own future weight. Making those predictions is quite easy for myself.
Other people can use those predictions to predict whether I'm good at predicting my future weight, based on my past PredictionBook performance.
What's the feedback that I get on predictionbook? Are people willing to predict how good I'm at predicting? No. I get accused of spamming predictionbook.
When it comes to training calibration, however, it's probably good to have claims where you don't have to wait to see whether you are right or wrong. There are many facts whose truth value can be clearly determined but where the average person isn't sure whether the fact is true.
Take a good university textbook. Search the textbook for factual claims where a novice could think that either A or B is true but the textbook clearly specifies which one is correct.
It's a lot more effective to train your calibration on textbook-level questions than to train to be better calibrated at guessing which politician wins an election or which team wins the NBL.
CFAR's Credence game would profit from moving into the direction of meaningful questions the way I describe in http://lesswrong.com/r/discussion/lw/fn0/credence_calibration_game_faq/7ymq
What's the feedback that I get on predictionbook? Are people willing to predict how good I'm at predicting? No. I get accused of spamming predictionbook.
We are willing to do that for a few predictions; but when you make a ton of predictions which you refuse to mark private and which have hit diminishing returns and which are actively interfering with the ability to monitor every other prediction's activity by flooding them off the Happenstance page, then don't be surprised if the language gets stronger!
I've also found that it's hard to come up with predictions which are both well-specified and interesting.
I don't doubt that it's a hard problem. I doubt that it's inherently time-consuming. There are mental barriers that you have to cross, and crossing those barriers is hard.
If you want 250 new predictions/month here's something you can do:
Install RescueTime. For your 10 most visited websites, you make predictions about the upper limit for the average time you'll spend on those websites over the next week and the next month. You make predictions at the 10%, 25%, 50%, 75%, and 90% levels.
At the end of every week you make new predictions for the next week. At the end of every month you make predictions for the next month.
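The routine described above could be sketched roughly like this. The site names, baseline minutes, and scaling factors are all invented placeholders - RescueTime itself only supplies the time totals, and a real user would set the bounds by feel rather than by formula:

```python
# Hypothetical weekly totals, as you might read them off RescueTime.
last_week = {"news.ycombinator.com": 310, "reddit.com": 140}

LEVELS = [10, 25, 50, 75, 90]

def make_predictions(site, baseline_minutes):
    """Turn one site's baseline into five upper-bound predictions,
    one per confidence level. The scaling factors are arbitrary
    guesses for illustration."""
    preds = []
    for level, factor in zip(LEVELS, [0.7, 0.85, 1.0, 1.2, 1.5]):
        bound = round(baseline_minutes * factor)
        preds.append((level, f"I will spend at most {bound} minutes on {site} next week."))
    return preds

for site, minutes in last_week.items():
    for level, text in make_predictions(site, minutes):
        print(f"{level}%: {text}")
```

Ten sites times five levels, repeated weekly and monthly, gets you into the hundreds of predictions per month with very little effort per prediction.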
Coming up with the idea of using RescueTime as a basis for predictions takes creativity. It's something that's hard for most people. I spent several years thinking about the problem to get to a place where it doesn't take much time for me to come up with predictions.
Actually making those predictions is not very time-consuming. It takes a lot less time per prediction than the approach of browsing the various sites lukeprog proposes to look for interesting predictions.
It's a lot more effective to train your calibration on textbook-level questions
What makes you think so?
Most predictions in daily life aren't about sports or about which politician gets elected. Most meaningful predictions that I make in my daily life aren't of the type you would find on Intrade.
How often do you make a decision in your daily life where it matters which sports team wins? In my life that doesn't happen. Most of my personal decisions also don't depend on which politicians win elections.
To get educated, students are sent to university, where they try to learn the knowledge in textbooks. Students who study sport focus on studying sports statistics; students who study politics don't focus on studying which politician won which election.
Most of the knowledge that people can acquire is outside the category of predictions you find on Intrade.
If people want to learn how the world works, reading textbooks is better than reading the news. By the same token, it makes sense to calibrate on textbook knowledge.
Calibrating on actual personal events is also good; it means you get better at predicting other personal events.
Most predictions in daily life
...aren't textbook level questions either; the first two paragraphs of your reply strike me as irrelevant to my question.
Textbooks are indeed used in education; that doesn't establish that what educates most effectively also happens to be what most effectively trains calibration. We have strong reason to doubt that: namely, that many well-educated people are also poorly calibrated.
On the other hand, I'm not aware of strong evidence to the effect that textbook questions are more effective in training calibration than any other type of question (including sports or world events or estimation quizzes, and so on).
[Most predictions in daily life]...aren't textbook level questions either
That depends. For a student who spends 8 hours per day studying for university, many questions boil down to textbook knowledge. For a scientist who does biology research, it's also very important to have a firm grasp of the various biology questions that are based on textbook knowledge.
Good rationality training is supposed to make a scientist who studies biology better at biology.
We have strong reason to doubt that: namely, that many well-educated people are also poorly calibrated.
I don't think that there are many people who are calibrated on their knowledge of textbook questions.
Let me give you an example: Question: Which enzymes catalyse RNA synthesis? A) RNA polymerases B) RNA telomerases
The person who answers the question has to say either A or B and predict how likely he's right.
During most university courses, students aren't asked how likely they think they are to be right. As a result, the students aren't well calibrated on being right.
It seems to me this could be a smartphone app. Whenever a person wants to make a prediction about a personal event, they click on the app and speak, with a pause between the thing and how likely you think it is. The app could just store verbatim text, separating question/answer, and timestamping recordings in case you want to update your prediction later. If you learn to specify when you think the outcome will occur, it can make a sound to remind you to check off whether it happened; otherwise it could remind you periodically, like at the end of every day. Why couldn't it have data analysis tools to let you visualize calibration, or find useful patterns and alert you? Seems a plausible app to me.
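The calibration-visualization part of such an app could be sketched as follows. The record format (stated probability, outcome) and the bucket width are assumptions for illustration, not a spec:

```python
from collections import defaultdict

# Each record: (stated probability, whether the prediction came true).
records = [(0.9, True), (0.9, True), (0.9, False),
           (0.6, True), (0.6, False), (0.6, False)]

def calibration_table(records, bucket_width=0.1):
    """Group predictions into probability buckets and compare stated
    confidence with the observed hit rate - the core computation
    behind any calibration chart."""
    buckets = defaultdict(list)
    for p, outcome in records:
        buckets[round(p / bucket_width) * bucket_width].append(outcome)
    return {round(b, 1): sum(v) / len(v) for b, v in sorted(buckets.items())}

# A well-calibrated user's hit rate in each bucket should be close
# to the bucket's stated confidence.
print(calibration_table(records))
```

Plotting stated confidence against observed hit rate gives the familiar calibration curve, and the same data could drive the pattern-finding alerts mentioned above.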
I've found that predicting how many/which people will show up to various events and how late people are going to be to meetings are both good prediction formats. They're easy to formulate, and for most people they will come up frequently in day-to-day life. Also, somewhat similar to ChristianKl's suggestion, I've found "I will get at least a [letter grade] on [assignment]." to be both easy to generate and at least semi-relevant to the average student.
Yo, mods! This looks like a phishing attempt to me. I would NOT recommend following that link unless you have a disposable VM handy.
If you really think you've found a way to get good at making predictions, playing the stock market seems like the obvious thing to do. A friend of mine has consistently made good money trading stocks over the past few years, so I don't think it's impossible.
DAGGRE may be useful: http://daggre.org/info/ (I haven't signed up to see if it's just variants on IARPA questions or what.)
EDIT: browsing the site a little, it looks like mostly IARPA questions.
I'd like to become better calibrated via PredictionBook and other tools, but coming up with well-specified predictions can be very time-consuming. It's handy to be provided with a stock of specific claims to make predictions (or post-dictions) about, as with CFAR's Credence Game.
Therefore, I asked Jake Miller and Gwern to put together a list of prediction sources. Feel free to suggest others!
Prediction Sites