The French newspaper *Le Monde* ran an interesting exercise for rationalists in the context of the French election.

They first ran a classic "which candidate is best for you?" poll: multiple-choice questions on various topics (for each question you select one answer and rate how important the issue is to you), and at the end they name the candidate that, according to them, is closest to your answers. Nothing new there.

But then they did something much more interesting, at least from a rationality-training point of view: they asked the same questions, but instead of "what is your opinion on this topic?" and "how important is this issue to you?", they asked "what do you think the majority of our readers answered?" and "how do you think they rated the importance of this issue?". At the end they give you a score from 0 to 1000 for how good your predictions were.

It's in French, so it will be hard for most of you to try, but it is available online here if you want.

I found this kind of exercise (trying to guess what other people answered) interesting from a LW point of view, because it makes your beliefs about the opinions, priorities, and mentality of French people pay rent.

So I wanted to share it with the LW community, and to ask whether you know of similar exercises elsewhere that give you a way to check how accurate your belief network is on complicated issues, whether you find them interesting too, and how they could be improved.

As an idea for improvement, I would suggest adding a confidence rating to each question: the more confident you are in your answer, the more points you gain if you're right, but the more you lose if you're wrong.
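One standard way to implement such a confidence rating is a proper scoring rule, which rewards honest confidence reports. Here is a minimal sketch using a quadratic (Brier-style) rule; the function name and the 0.5–1.0 confidence scale are my own assumptions, not anything from the quiz:

```python
def score_answer(correct: bool, confidence: float) -> float:
    """Score one question under a quadratic (Brier-style) proper scoring rule.

    confidence: your subjective probability (0.5..1.0) that your answer
    is right. Reporting your true probability maximizes expected score.
    Returns a value in [-1, 1].
    """
    # Probability you assigned to the outcome that actually occurred.
    p = confidence if correct else 1.0 - confidence
    # Quadratic scoring rule, rescaled so a sure correct answer scores 1.
    return 1.0 - 2.0 * (1.0 - p) ** 2

# A confident correct answer earns more than a hedged one...
assert score_answer(True, 0.9) > score_answer(True, 0.6)
# ...but a confident wrong answer also loses more.
assert score_answer(False, 0.9) < score_answer(False, 0.6)
```

The key property is exactly the one described above: confidence amplifies both gains and losses, so exaggerating your confidence lowers your expected score.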


> what do you think the majority of our readers answered

See Bayesian Truth Serum. I'm eager to one day discover a bona fide implementation of BTS as a web app. This isn't quite it: BTS requires that respondents actually be encouraged to get a high score, and its scoring isn't based only on accurately guessing the popular answer.
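For readers unfamiliar with it, BTS (Prelec, 2004) scores each respondent on two components: an "information" bonus for answers that are surprisingly common relative to the crowd's predictions, plus a penalty-weighted term for how well their own prediction matches actual frequencies. The sketch below is a simplified reconstruction of that scheme, not a reference implementation; all names and the smoothing constant are my choices:

```python
import math

def bts_scores(answers, predictions, alpha=1.0):
    """Simplified Bayesian Truth Serum scoring.

    answers: list with each respondent's chosen option index.
    predictions: list of each respondent's predicted frequency
        distribution over the options (lists summing to 1).
    Returns one BTS score per respondent.
    """
    n, k = len(answers), len(predictions[0])
    eps = 1e-9  # smoothing to avoid log(0)
    # Actual answer frequencies across respondents.
    xbar = [(sum(1 for a in answers if a == j) + eps) / n for j in range(k)]
    # Geometric mean of everyone's predicted frequencies.
    ybar = [math.exp(sum(math.log(max(p[j], eps)) for p in predictions) / n)
            for j in range(k)]
    scores = []
    for a, pred in zip(answers, predictions):
        # "Surprisingly common" bonus for your own answer.
        info = math.log(xbar[a] / ybar[a])
        # Reward for predicting the actual frequencies accurately.
        pred_score = sum(xbar[j] * math.log(max(pred[j], eps) / xbar[j])
                         for j in range(k))
        scores.append(info + alpha * pred_score)
    return scores
```

Note how this differs from the Le Monde quiz: your own answer matters (via the information term), not just your guess about everyone else's.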

Can we come up with a good measure of rationality along these lines?

Calibration for starters.
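Calibration can be measured directly from quiz data like this: bucket answers by stated confidence and compare each bucket's stated confidence to its observed hit rate. A minimal sketch (function name and data format are hypothetical):

```python
from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_confidence, was_correct) pairs.

    Returns {confidence_bucket: observed accuracy}. For a well-calibrated
    predictor, observed accuracy in each bucket is close to the bucket's
    stated confidence.
    """
    buckets = defaultdict(list)
    for conf, correct in predictions:
        buckets[round(conf, 1)].append(correct)
    return {b: sum(v) / len(v) for b, v in buckets.items()}

data = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
result = calibration(data)  # {0.9: 2/3, 0.6: 0.5}
```

In this toy example the 0.9 bucket only hits 2/3 of the time, which would indicate overconfidence at that level.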

I remember having an app for the Wii where they'd ask random questions each day, and you'd give your answer along with what you believed was the most common answer. I noticed that on a lot of questions, people would have done better guessing that everyone else agreed with them.

TimS:

I faintly recall a study about finding wallets where participants predicted they would not steal money but that most people would. This is some evidence that people think they meet a higher moral standard than the community as a whole. But that seems somewhat contradictory to my previous assertion.

I'm suddenly concerned that I'm just generating Fake Explanations.

This app is called the “Everybody Votes Channel” (at least in the US), and it is a free download (if you have a Wii).

It tracks your prediction accuracy and some other figures, and you can also view a breakdown of question responses (and prediction correctness) by region, gender and maybe some other attribute I've forgotten. The questions are generally trivia about personal habits or not-obscure-but-not-daily-knowledge facts.

TimS:

Interesting. For some portion of the population, I expect their prediction of others to center around their own political beliefs, for "Politics is the mind-killer" / "Mind-projection" reasons. I wonder if there is a way to confront people with this, as a way of improving rationality?

Indeed. And even as a rationalist-in-training (I've read the Sequences and try to be a rationalist, but I don't consider myself a black belt), I found it quite hard to dissociate "what I think is the best answer" from "what I think people think is the best answer". It required real mental effort, more than I would have predicted beforehand.

As an anecdote, I had a slight opposite tendency, to go for what seemed like the worst answer, and I had to switch answers twice because of it.

gjm:

That was quite interesting. I got 728/1000 despite not knowing anything much about French politics and speaking only quite rusty French; I've no idea whether that's actually good or bad. It's interesting to see how France's Overton window differs from that of the UK (where I live) or the US (whose politics are highly visible on the net).

I got a score of 656/1000, which doesn't seem all that great, but it told me that it was a good score.

I used to follow French politics quite closely, but haven't in a while. For this quiz I used a simple algorithm based on a superficial model of the typical Le Monde reader: pick the most "centrist" of the "left-wing" responses, and rate everything around three stars in importance.