I've always appreciated the motto, "Raising the sanity waterline." Intentionally raising the ambient level of rationality in our civilization strikes me as a very inspiring and important goal.

It occurred to me some time ago that the "sanity waterline" could be more than just a metaphor; it could be quantified. What gets measured gets managed. If we have metrics to aim at, we can talk concretely about strategies for effectively promulgating rationality, and a "rationality intervention" can then be judged effective to the extent that it actually improves a targeted metric.

It is relatively easy to concoct or discover second-order metrics. You would expect a variety of metrics to respond to the state of ambient sanity. For example, I would expect that, all things being equal, preventable deaths should decrease when overall sanity increases, because a sane society acts effectively to prevent the kinds of things that lead to preventable deaths. But other factors may also cause these contingent measures to fluctuate in either direction, so it's important to remember that they are only indirect measures of sanity.

The UN collects many different types of data. Perusing their database, one quickly sees that there are a lot of things that are probably worth caring about but which have only a very indirect relationship with what we could call "sanity". For example, one imagines that GDP would increase under conditions of high sanity, but that'd be a pretty noisy measure.

Take five minutes to think about how one might measure global sanity, and maybe brainstorm some potential metrics. Part of the prompt, of course, is to consider what we could mean by "sanity" in the first place.

~~~ THINK ABOUT THE PROBLEM FOR FIVE MINUTES ~~~

This is my first pass at brainstorming metrics which may more-or-less directly indicate the level of civilizational sanity (a toy sketch of how such metrics might be aggregated follows the list):

  • (+) Literacy rate
  • (+) Enrollment rates in primary/secondary/tertiary education
  • (-) Deaths due to preventable disease
  • (-) QALYs lost due to preventable causes
  • (+) Median level of awareness about world events
  • (-) Religiosity rate
  • (-) Fundamentalist religiosity rate
  • (-) Per-capita spending on medical treatments that have not been proven to work
  • (-) Per-capita spending on medical treatments that have been proven not to work
  • (-) Adolescent fertility rate
  • (+) Human development index
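
As a toy illustration of how such metrics could be rolled into a single index, here is a minimal sketch, assuming invented indicator names and made-up data: z-score each indicator across countries, flip the sign of the (-) metrics, and average.

```python
from statistics import mean, stdev

# Sign convention mirrors the (+)/(-) marks in the list above:
# +1 where higher values indicate more sanity, -1 where lower values do.
# Indicator names and all numbers below are invented for illustration.
INDICATORS = {
    "literacy_rate": +1,
    "school_enrollment": +1,
    "preventable_disease_deaths": -1,
    "adolescent_fertility": -1,
}

def sanity_score(country, all_countries):
    """Toy composite: average of sign-corrected z-scores across indicators."""
    parts = []
    for name, sign in INDICATORS.items():
        values = [c[name] for c in all_countries]
        mu, sigma = mean(values), stdev(values)
        z = (country[name] - mu) / sigma if sigma else 0.0
        parts.append(sign * z)
    return mean(parts)

countries = [
    {"literacy_rate": 99, "school_enrollment": 95,
     "preventable_disease_deaths": 10, "adolescent_fertility": 8},
    {"literacy_rate": 70, "school_enrollment": 60,
     "preventable_disease_deaths": 90, "adolescent_fertility": 60},
]
print([round(sanity_score(c, countries), 2) for c in countries])  # [0.71, -0.71]
```

The hard part, obviously, is choosing, sourcing, and weighting the indicators, not the arithmetic.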

It's potentially more productive (and probably more practically difficult) to talk concretely about how best to improve one or two of these metrics via specific rationality interventions, than it is to talk about popularizing abstract rationality concepts.

Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.

As to the question of the meaning of "sanity", it seems to me that it denotes a certain basic package of rationality.

In Eliezer's original post on the topic, he seems to suggest a platform that boils down to a comprehensive embrace of probability-based reasoning and reductionism, with enough caveats and asterisks applied to that summary that you might as well go back and read his original post to get his full point. The idea was that with a high enough sanity waterline, obvious irrationalities like religion would eventually "go underwater" and cease to be viable. I see no problem with any of the "curricula" Eliezer lists in his post.

It has become popular within the rationalsphere to push back against reductionism, positivism, Bayesianism, etc. While such critiques of "extreme rationality" have an important place in the discourse, I think for the sake of this discussion, we should remember that the median human being really would benefit from more rationality in their thinking, and that human societies would benefit from having more rational citizens. Maybe we can all agree on that, even if we continue to disagree on, e.g., the finer points of positivism.

"Sanity" shouldn't require dogmatic adherence to a particular description of rationality, but it must include at least a basic inoculation of rationality to be worthy of the name. The type of sanity that I would advocate for promoting is this more "basic" kind, where religion ends up underwater, but people are still socially allowed to be contrarian in certain regards. After all, a sane society is aware of the power of conformity, and should actively promote some level of contrarianism within its population to promote a diversity of ideas and therefor avoid letting itself become stuck on local maxima.

21 comments

Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.

That's not CFAR's theory of action. CFAR's strategy rests on the idea that research into new thinking habits is very important.

(-) Per-capita spending on medical treatments that have not been proven to work

I don't think "proven to work" should be the metric anybody should use for picking medical treatments. The likely effects of the treament given it's costs and risks is a much better guide.

EY is not irrational for trying the Shangri-La diet, or for building his own high-powered lamp array to treat an illness.

(+) Median level of awareness about world events

That depends a lot on how you define awareness. Many 9/11 truthers do better at quizzes about world events than the average person. That doesn't stop them from believing in bad conspiracy theories.

In my time at skeptics.stackexchange I noticed a pattern. Many 9/11 truthers seem incapable of writing a question that's within the rules of skeptics.stackexchange. They want general questions like "Did the US government lie about 9/11" instead of specific questions like "Have two dozen members of Osama bin Laden's family been urgently evacuated from the United States in the first days following the terrorist attacks?"

It seems that those people go wrong by having access to a lot of information while avoiding critically checking individual claims, because they focus only on the big claims.

People who don't follow world affairs and who know they aren't really informed are less of a problem than people who do follow world affairs and who get it wrong in the way 9/11 truthers do.


When it comes to metrics, a good Brier score (or a similar metric) shows the sanity of the person being measured.

That's not CFAR's theory of action. CFAR's strategy rests on the idea that research into new thinking habits is very important.

Regardless, CFAR isn't trying to raise the sanity waterline, and it isn't likely to do so by accident.

I don't think "proven to work" should be the metric anybody should use for picking medical treatments. The likely effects of the treament given it's costs and risks is a much better guide.

EY is not irrational for trying the Shangri-La diet, or for building his own high-powered lamp array to treat an illness.

There are arguments for and against most of these metrics. For example, in a more sane world, the powers that be would allocate resources toward either proving or disproving the efficacy of the Shangri-La diet. Thus, people would be able to find out by reading the literature whether it was a good diet, one way or the other. We live in an insane world where nutrition science is stuck in an epistemically toxic equilibrium, so individual experimentation is not a crazy thing to do.

That depends a lot on how you define awareness. Many 9/11 truthers do better at quizzes about world events than the average person. That doesn't stop them from believing in bad conspiracy theories.

I suspect that, all things being equal, more accurate world-knowledge is better for global sanity.

Consider the converse proposition. It would be much harder to defend the notion that "less knowledge about the world and current events is a predictor/indicator of greater rationality." It would also be relatively hard to justify a zero-correlation between world awareness and rationality.

All this goes toward proving that we need a variety of indicators. As with the 9/11 truthers, you can have a lot of world knowledge but still be lacking in the "complete basic rationality package" that Eliezer outlines and I try to summarize, and thus you still make horrible choices. A little rationality can hurt you, after all. Thus, the goal should be to track more metrics, and to make sure we're not trading them off against one another.

When it comes to metrics, a good Brier score (or a similar metric) shows the sanity of the person being measured.

Thanks for the suggestion. I don't think this is a metric that is being widely collected, but maybe it should be.
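
For reference, the Brier score is just the mean squared error between stated probabilities and actual outcomes. A minimal sketch, with invented forecasts:

```python
def brier_score(forecasts):
    """Mean squared error between forecast probabilities and outcomes.
    0.0 is a perfect record; a constant 50% forecast scores 0.25;
    lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Pairs of (probability assigned to the event, 1 if it happened else 0).
my_forecasts = [(0.9, 1), (0.7, 1), (0.2, 0), (0.6, 0)]
print(brier_score(my_forecasts))  # 0.125
```

Collecting this at a population level would require regular forecasting surveys with questions that actually resolve, which may be part of why nobody collects it today.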

"Proven to work" means that you have peer reviewed studies that show that in a controlled trial the group that get's the treatment does better.

On the other hand, the phrase doesn't say anything about the size of the treatment's benefit. The antidepressants that are proven to work produced a 1.8-point improvement on a 50-point scale in the Kirsch (2008) meta-analysis. The size of the positive effect is important for rational decisions about which treatment to pursue.

Lastly, the cost of treatment matters. You can argue that FDA-approved fish oil pills are more "proven to work" than non-FDA-approved fish oil pills. I don't think it's rational to pick the FDA-approved pills for a person who wants to take fish oil and has to pay for it themselves.

Consider the converse proposition. It would be much harder to defend the notion that "less knowledge about the world and current events is a predictor/indicator of greater rationality." It would also be relatively hard to justify a zero-correlation between world awareness and rationality.

The key question is what you mean by "world awareness". You could call European journalists a class of well-informed people with world awareness. At the same time, they don't beat the chimps at Hans Rosling's quiz about African development.

You have to be careful that you measure what you want to measure. Newspapers have a strong ability to lead people to believe in narratives that are wrong.

Maybe the proper approach to raising the worldwide sanity needs to be multi-layered, simply because different people are at different levels of sanity and education. It makes no sense to debate "frequentism vs Bayes" with someone who cannot solve a quadratic equation, which describes the majority of the population.

(No exaggeration here. Have you ever tried talking to random adult people on the street and asking them to solve a simple quadratic equation such as "x^2 + 4x + 3 = 0"? I did. The short version is that your best hope is to find a high-school math teacher, and even then success is far from guaranteed.)

For constructing a Friendly superhuman self-improving AI, you need a relatively small team of top-notch rationalists. But for raising the sanity waterline, you need only the very basics; what you do need is the ability to make people listen, and a system that scales well (if you hope to increase the sanity of millions or billions of people). Those are two quite different things.

And if we are talking about these lowest levels, such as literacy, many people are already trying to solve this problem. Maybe there are some reasons why they systematically produce subpar results; maybe they work within a system that has bad incentives, and one could step outside of that system.

This could be a myth, but people have told me that Rowling's Harry Potter books doubled the number of kids visiting libraries to borrow books. I wonder whether any functional-literacy statistics reflect this. Maybe someone could write a book about a wizard kid who used math to achieve great powers, and suddenly math skills would skyrocket. Some kind of educational intervention completely bypassing the formal educational system: maybe we could invent it, and use the internet to put it into action.

Maybe the proper approach to raising the worldwide sanity needs to be multi-layered, simply because different people are at different levels of sanity and education. It makes no sense to debate "frequentism vs Bayes" with someone who cannot solve a quadratic equation, which describes the majority of the population.

This makes sense, and I think it's possible to boil down some concepts to simpler forms that are nonetheless still useful. In a wide variety of situations, I can help people clear up confusions about their "beliefs" by showing them how to treat all beliefs as different explanations that they are implicitly putting more or less "weight" on. It certainly helps with any confusion of the "I'm not sure what I really believe" variety, to recontextualize the problem such that the person can believe many things with a variety of certainty levels. I never have to resort to probability theory or mention Bayes.

Maybe someone could write a book about a wizard kid who used math to achieve great powers, and suddenly math skills would skyrocket.

So, basically HPMOR with more mainstream appeal. HPMOR was actually a pretty good stab at drawing the interest of the IQ 110+ crowd, but I'll bet a similar thing could be done for the other half of the bell curve.

but I'll bet a similar thing could be done for the other half of the bell curve

Got to include porn, then.

Whether or not you're joking, you're right.

The results of my five minutes of thinking:

Take a sample of the group whose sanity you want to measure, then assess:

  • productivity
  • goal achievement
  • correct predictions, especially correct contrarian ones
  • ability to recognize fallacious thinking
  • willingness to engage with political opponents
  • ability to develop nuanced political opinions
  • ability to detect lies and deception in information sources

I went in a different direction than the post. The list I generated turned out to be far more about abstract individual-sanity ideas than about things we already have numbers for.

A lot of the arguments you make revolve around correlation. That leaves the question of what purpose your measurement is supposed to have.

A measurement that tracks a merely correlated metric can be fine for some purposes. If you optimize toward it, however, you might run into Goodhart's law problems.

A measurement that tracks a merely correlated metric can be fine for some purposes. If you optimize toward it, however, you might run into Goodhart's law problems.

Though if something nice has twenty correlates, and you track all twenty of those correlates, you are probably better off than if you tracked one very strong correlate. (Improving IQ test scores without improving everything else that correlates with IQ would be problematic, for example.)

It would be really interesting to conceive of a dystopia where all of the proposed metrics so far suggested in this post+comments were optimized without also leading to markedly improved levels of sanity.
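
A toy simulation of the twenty-correlates intuition (all distributions invented): if each observed metric is latent "sanity" plus independent noise, the average of twenty such metrics tracks the latent variable far more tightly than any single one does.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Latent "sanity" for 1000 hypothetical societies.
latent = [random.gauss(0, 1) for _ in range(1000)]

# Each observable metric = latent value + independent unit noise.
one_metric = [s + random.gauss(0, 1) for s in latent]
twenty_avg = [s + sum(random.gauss(0, 1) for _ in range(20)) / 20
              for s in latent]

print(round(corr(latent, one_metric), 2))  # ~0.71: one noisy correlate
print(round(corr(latent, twenty_avg), 2))  # ~0.98: mean of twenty correlates
```

The Goodhart catch is that once any single metric becomes the optimization target, its noise term stops being independent of the incentives acting on it.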

We are at the moment looking at the possibility of a SENS board member becoming head of the FDA: a person who wants the FDA to let products without proven benefits onto the market, provided they are proven safe.

By the metric of per-capita spending on medical treatments that have not been proven to work, that might be a bad step. On the other hand, I think most of this community would find a SENS board member at the head of the FDA really great.

To top it off, the same person cofounded Thiel's 20 Under 20, where people were paid to drop out of school. That doesn't go well with the goal of increasing formal education.

(+) Enrollment rates in primary/secondary/tertiary education

How are you defining 'education' here? Does homeschooling count? What about trade schools? Apprenticeships?

If a society had a college education rate of, say, 98%, would it have a higher or lower sanity waterline than a society with a college education rate of 30% where most of the other 70% went into employer-funded job training, apprenticeships, etc.?

And education depresses fertility. Until widespread genetic engineering or FAI, the values of populations whose fertility rate is above (replacement rate + defection rate) will gain support, and the values of populations whose fertility rate is below it will lose support. This especially matters in democracies, as anyone who follows Israeli politics can tell you. What this means is that, even if raising tertiary education rates raises the sanity waterline in the short term (which I'm not convinced of), it will likely lower it in the long run.

(+) Median level of awareness about world events

Why? Rationalists win. To the extent that awareness about world events helps you win, awareness about world events is rational. To the extent that awareness about world events does not help you win, you may as well be an anorak.

(-) Religiosity rate

How do you square this with the scientific consensus? Again, rationalists win. If you interpret the relevant studies as saying that religious people accrue benefits (social capital, a sense of meaning, a support network, etc.) from religion (rather than irreligion selecting against the personality traits that provide those things), you have to make the case that the epistemic rationality gains outweigh the instrumental rationality losses to the median human in the society you're trying to affect, and that either these gains outweigh the losses from changing religion/engineering new religions or 'saner' religions can't be created and our only choice is between Richard Dawkins and Creflo Dollar.

(-) Adolescent fertility rate

I would expect a society where 18-year-olds are financially and morally (and... wisdom-ly?) capable of raising children to have a higher sanity waterline than a society where, for financial, moral, and... wisdom-al? reasons, reproduction has to be deferred to one's early thirties, unless the simplest way to raise the sanity waterline is to increase the rate of autism.

How are you defining 'education' here? Does homeschooling count? What about trade schools? Apprenticeships?

The UN statistics that I linked only record what we could consider to be mainstream education -- school, university, graduate school/professional degrees. A truly comprehensive sanity metric would go beyond this.

If a society had a college education rate of, say, 98%, would it have a higher or lower sanity waterline than a society with a college education rate of 30% where most of the other 70% went into employer-funded job training, apprenticeships, etc.?

That's true. One has to be on the lookout for pathological social trends masquerading as widespread rationality. For example, the current US attitude that you have to go to college is looking less and less rational by the day. That said, in some other country with 10% high school graduation rates and zero universities, I would consider any increase in those numbers to be a sanity improvement.

And education depresses fertility. Until widespread genetic engineering or FAI, the values of populations whose fertility rate is above (replacement rate + defection rate) will gain support, and the values of populations whose fertility rate is below it will lose support. This especially matters in democracies, as anyone who follows Israeli politics can tell you. What this means is that, even if raising tertiary education rates raises the sanity waterline in the short term (which I'm not convinced of), it will likely lower it in the long run.

There are a lot of complicated things going on with fertility rates. As a society becomes more sane, it also becomes more stable, and its overall fertility rate declines to the level of "elective" reproduction. In a truly sane society, one might imagine fertility rates ticking back up again, as overeducated adults actually obtain comprehensive financial security and feel comfortable having more children. I see myself as a possible example of this. My wife and I both have advanced degrees, but we chose to have three children because we can actually expect to provide three children with the kind of life that overeducated adults like us think they deserve.

Why? Rationalists win. To the extent that awareness about world events helps you win, awareness about world events is rational. To the extent that awareness about world events does not help you win, you may as well be an anorak.

I see "more knowledge about the world state" as being implied within the Omohudro drives. I agree that there is a saturation point. It is useful to know that there is a really bad war in Syria. It is not useful to know that parking fines in Singapore have increased by 4% in the last quarter. Unless you're traveling to Singapore. One must diligently balance gathering information versus utilizing information.

I think knowing a basic slate of facts about current events would be well correlated with sanity. What goes on this slate would be flexible and subjective, but that doesn't imply it would be useless as a measure.

How do you square this with the scientific consensus? Again, rationalists win. If you interpret the relevant studies as saying that religious people accrue benefits (social capital, a sense of meaning, a support network, etc.) from religion (rather than irreligion selecting against the personality traits that provide those things), you have to make the case that the epistemic rationality gains outweigh the instrumental rationality losses to the median human in the society you're trying to affect, and that either these gains outweigh the losses from changing religion/engineering new religions or 'saner' religions can't be created and our only choice is between Richard Dawkins and Creflo Dollar.

We're sitting at a weird point in history where we have dynamited all our social institutions except religion, which makes religion look artificially appealing. I don't think the statement "epistemic rationality gains outweigh the instrumental rationality losses to the median human" is true. I think 95% of religious people have never been exposed to even a basic level of rationality and don't know what it could do for them, much less for society.

Regardless of that, I think it is demonstrably true that countries with lower religiosity are, on balance, more sane. It's not always the case, because other bad ideas can take the place of religion. Hence the need for many aggregated metrics in coming up with a final "sanity score".

I would expect a society where 18-year-olds are financially and morally (and... wisdom-ly?) capable of raising children to have a higher sanity waterline than a society where, for financial, moral, and... wisdom-al? reasons, reproduction has to be deferred to one's early thirties, unless the simplest way to raise the sanity waterline is to increase the rate of autism.

I don't think 18-year-olds can be wisdom-ly capable of raising children. But 18 is not really adolescent. It's uncontroversial to say that a well-structured society has fewer pregnant 13-year-olds.

That's true. One has to be on the lookout for pathological social trends masquerading as widespread rationality. For example, the current US attitude that you have to go to college is looking less and less rational by the day. That said, in some other country with 10% high school graduation rates and zero universities, I would consider any increase in those numbers to be a sanity improvement.

This sounds like an exploration/exploitation problem. If every society heads for the known maximum of sanity, it'll be much more difficult to find higher maxima that are yet unknown. If the USA had headed for the known maximum of sanity after seceding from the British Empire, we'd have a king.

Just as it seemed clear to the revolutionaries that the known maximum of sanity in government was suboptimal, it seems clear to me that the known maximum of sanity in education is suboptimal.

High school on the Prussian model is about burning years of life in order to be socialized into government-promoted cultural norms and be prepared for work where discipline matters more than thought -- e.g. industrial jobs and the military. College on the American model is about burning years of life (and taking on massive amounts of debt) in order to be socialized into academia-promoted cultural norms and obtain a certificate that says, essentially, "this person is allowed to work".

Although it's probably true that most existing societies with 10% high school graduation rates and zero universities rank lower in sanity than the USA, it's also probably true that the USA is, modulo technological improvement and the increase in conceptual vocabulary that flows from that, less sane now than it was before the GI Bill, Griggs v. Duke, etc., because it's completely viable and even promoted for 22-year-olds to have no work experience, a mountain of debt, and a head full of nonsense.

If people could, say, test into paid job-training programs -- internships, apprenticeships, etc. -- at the age of 16, and if this were the mainstream life path, this would be a sanity improvement: the resulting 22-year-olds would have financial stability, six years of work experience, markedly less political indoctrination, and no mountain of debt taken on to pay parasitic political radicals for a "see, I'm not banned from working!" certificate.

The only downside I can see is the potential effect on basic research, but I'm not sure how significant that would be.

We're sitting at a weird point in history where we have dynamited all our social institutions except religion, which makes religion look artificially appealing. I don't think the statement "epistemic rationality gains outweigh the instrumental rationality losses to the median human" is true. I think 95% of religious people have never been exposed to even a basic level of rationality and don't know what it could do for them, much less for society.

What could it do for them? If, say, health and an extended lifespan are saner, how do the downsides of being, say, a Seventh-Day Adventist outweigh the known upsides? (Remember that most people are much better at compartmentalization than most LW posters, and that decreases in religion don't mean decreases in folk magic -- if anything, the atheistic communities I've seen outside LW are heavier on folk magic than the religious ones I've seen. The other side of that, however, is that some folk magic can be legitimately useful -- but astrology and MBTI don't strike me as falling inside that category.)

What could it do for them?

I suspect, at a moderate level of certainty, that epistemic blindspots of the type that religion requires are highly toxic to both individual and society-level rationality.

But let's stipulate for the sake of argument that a Seventh-Day Adventist in modern times is precisely no worse off in their day-to-day life due to their religious beliefs. That Seventh-Day Adventist still has to live in a world where "we respect the beliefs of everyone", which is code for "fantasy and magical thinking centered on ancient books have to be continuously considered in a wide variety of public policy discussions".

If the sanity waterline were truly raised to the point that religion "goes underwater", then we would only have to deal with the normal human failure modes of discourse that occur on a civilizational level, which are already pretty bad, without having to also juggle the policy desires of religionists. So, choosing a saner civilization means you accrue the benefits of a saner civilization.

Of course, I don't actually think that an individual Seventh-Day Adventist is "no worse off" than a rationalist. It's the work of seconds to dream up a wide variety of situations and scenarios in which the religionist is obligated to make objectively bad concrete choices to preserve their self image, which a rationalist wouldn't be forced to make. In exchange, the religionist gets some theoretical enhanced community support (not something I ever experienced when I was in a religion) and a nice, clean, settled ontology that comforts them regarding death. Still doesn't seem worth the tradeoff to me.

Religion requires epistemic blindspots, but is it religion as such that requires them? That is, is requiring epistemic blindspots a property of religion itself, or is religion one among many subclasses of the type of thing that requires epistemic blindspots? In the former case, raising the sanity waterline to specifically eliminate religion would genuinely raise it; in the latter case, it might lower it.

What do you think would happen to the sanity waterline if all the Seventh-Day Adventists in America became atheists and joined an antifa group? Would it rise?

Seventh-Day Adventists' epistemic blindspots (from the atheistic perspective) are things like "God exists" and "we'll live forever in Heaven because we're right about when the Sabbath is" and "eventually the Catholic Church, mainstream Protestant groups, and the US government will get together to pass a law requiring observance of a Sunday Sabbath, and we'll be horribly persecuted for a while but it's OK because Jesus will come back soon after that". Antifa groups' epistemic blindspots are things like "liberal norms serve fascists and must be eroded ASAP", "mob violence is the most important form of political activism", and "murder is good when it's people we disagree with getting killed".

And Seventh-Day Adventists are more prone to epistemic blindspots than adherents of religions that don't share the unusual Christian innovation of elevating orthodoxy above orthopraxy, such as Shinto or mainstream American Judaism, both of which are clearly religions. (We have quite a few adherents of mainstream American Judaism in these circles; try asking a few of them about the utility of ritual, the upsides and downsides of religion, etc.)

Religion is one among many subclasses of the type of thing that requires epistemic blindspots, whatever that thing is. But there's another problem, which is that religion doesn't exist. The consensus in religious studies is that there's no coherent way to define 'religion' -- the category exists for strange historical reasons that are particular to the pre-secularization West and certainly don't hold everywhere. You can go to China or Japan or ancient Rome and ask, "is this religious? is this secular?", and they'll just look at you funny. (Admittedly, there's a complication, in that contact between 'pagans' and Christians or Muslims occasionally results in the local variety of paganism adopting the Christian or Muslim idea of 'religion' -- see e.g. here.)

Is Confucianism a religion? It has rites, holy texts, a quasi-prophet (Confucius) and influential quasi-theologians, such as Mencius, Dong Zhongshu, and Zhu Xi. How about Communism, the Hotep movement, or LW? What makes Louis Farrakhan a religious figure and Maulana Karenga a secular one?

I have no interest in "targeting" religion for annihilation, or anything like that. I don't disagree with anything you say here. Religion is just one subset of a class of failure mode that theoretically goes underwater when a society becomes saner. For the sake of defining my terms, I guess I'm just using "being religious" as a catchall for "possessing ontological beliefs that are not grounded in empirically knowable facts", but I'm not really interested in defending the details of that definition. I think people know what cluster in thingspace I'm pointing to when I say "religion".

Maybe I should have said something like this in the main post, but, consider a society that looks like ours except all school-aged children spend at least a semester studying the Human's Guide to Words section of the Sequences. How many absolutely stupid thoughts, beliefs, conversations would just never happen in that world? A lot of those thoughts/beliefs/conversations would be religio-centric, and a lot wouldn't be. The more "rationality interventions" you add, the fewer ostentatiously dumb things are permitted in the wider social milieu, and bad ideas "go underwater". That's the idea, anyway.

If the point is to use that UN database, then just go through the tree systematically asking "is this branch at all relevant?". Prune out the branches that aren't (you don't have to look at every leaf), and you're left with the more useful metrics. There's no need to brainstorm--it's just not that big.

If, on the other hand, you're willing to conduct your own surveys, then I think we can do better than this. Craft survey questions that measure more directly what you're interested in, instead of these proxies.

The waterline metaphor implies that there could be different levels of sanity. Someone sane enough to reject astrology might not be sane enough to sign up for cryonics, for example. So we might call the cryonics question more difficult than the astrology question. We don't have to decide exactly where questions rank on the scale. The data will tell you.

Easy questions might be about common superstitions, magical thinking, pseudoscience, and conspiracy theories. Common skeptics can get these. Questions about basic science literacy might be a little harder (e.g. dihydrogen monoxide, evolution). There are also questions illustrating the various cognitive biases more directly, as we've seen in the Sequences. And finally, the "correct contrarian" positions are the most difficult: cryonics, many-worlds, molecular nanotechnology, existential risks, etc.
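
A minimal sketch of the "the data will tell you" idea, with an invented response matrix: rank questions by how few respondents answer them sanely, which is the crude first step a one-parameter item-response (Rasch) model would refine.

```python
import math

# Rows are respondents, columns are questions, ordered roughly from
# "reject astrology" to "take cryonics seriously". 1 = the sane answer.
# All data invented for illustration.
responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [1, 0, 1, 0],
]

n_people = len(responses)
n_questions = len(responses[0])

# Question difficulty as log-odds of a wrong answer (higher = harder).
for q in range(n_questions):
    p = sum(row[q] for row in responses) / n_people
    p = min(max(p, 0.01), 0.99)  # clamp to avoid infinite logits
    print(f"question {q}: difficulty {math.log((1 - p) / p):+.2f}")

# Respondent "waterline": fraction of questions answered sanely.
for i, row in enumerate(responses):
    print(f"person {i}: {sum(row) / n_questions:.2f}")
```

A real survey would estimate person ability and question difficulty jointly, but even this crude tabulation recovers the astrology-easier-than-cryonics ordering from the data alone.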

My five minutes' worth of thinking:

Metrics that might be useful (on the grounds that, in hindsight, people would say they made bad decisions): traffic accident rate, deaths due to smoking, bankruptcy rates, consumer debt levels.

Experiments you could do if you could randomly sample people and get enough of their attention: simple reasoning tests (e.g. confirmation bias), getting people to make some concrete predictions and following them up a year later.

Maybe something measuring people's level of surprise at real vs. fake Facebook news (on the grounds that people should be more surprised at fake news)?