Lately, I've been thinking about the class of things that I'm calling "Civilizational Sanity Interventions." By that term I mean technologies, institutions, projects, or norms that, if implemented, would improve the quality of high-level decision making about important issues.

Which things, if they existed in the world, would make our society, collectively, saner?

Some examples (with which I expect most people around here to be familiar):

Prediction markets

Prediction markets are a clever way to aggregate all the available information to make accurate predictions.
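To make the aggregation mechanism concrete, here is a minimal sketch (in Python, with made-up numbers) of Hanson's logarithmic market scoring rule, one standard way to run such a market. The function names and parameters are my own illustrative choices; the key point is that the implied price of each outcome doubles as the market's consensus probability, and it moves whenever someone is willing to pay to trade against it.

```python
import math

# A minimal sketch of a logarithmic market scoring rule (LMSR), one common
# prediction-market mechanism. The market maker's cost function determines
# how much a trader pays to change the outstanding shares, and the implied
# price of each outcome serves as the market's aggregated probability.

def lmsr_cost(shares, b=100.0):
    """Cost function C(q) = b * log(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in shares))

def prices(shares, b=100.0):
    """Implied probability of each outcome: exp(q_i/b) / sum_j exp(q_j/b)."""
    weights = [math.exp(q / b) for q in shares]
    total = sum(weights)
    return [w / total for w in weights]

# Two outcomes: "project hits its deadline" vs. "it doesn't".
shares = [0.0, 0.0]                 # no trades yet: price is 50/50
print(prices(shares))               # [0.5, 0.5]

# A trader who believes the deadline will slip buys 30 "miss" shares.
cost = lmsr_cost([shares[0], shares[1] + 30]) - lmsr_cost(shares)
shares[1] += 30
print(round(cost, 2), [round(p, 3) for p in prices(shares)])
# The "miss" price rises above 0.5, reflecting the new information.
```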

Robin Hanson posits that the reason there isn't wider adoption of prediction markets is that they are a threat to the authority of existing executives.

If we lived in a world where the use of prediction markets was standard practice, decision makers would eventually face flak for acting against the predictions of the market, and pundits would have a lot less leeway to make inaccurate, politically motivated predictions.

As Hanson put it in a recent interview:

I’d say if you look at the example of cost accounting, you can imagine a world where nobody does cost accounting. You say of your organization, “Let’s do cost accounting here.”
That’s a problem because you’d be heard as saying, “Somebody around here is stealing and we need to find out who.” So that might be discouraged.
In a world where everybody else does cost accounting, you say, “Let’s not do cost accounting here.” That will be heard as saying, “Could we steal and just not talk about it?” which will also seem negative.
Similarly, with prediction markets, you could imagine a world like ours where nobody does them, and then your proposing to do it will send a bad signal. You’re basically saying, “People are bullshitting around here. We need to find out who and get to the truth.”
But in a world where everybody was doing it, it would be similarly hard not to do it. If every project with a deadline had a betting market and you say, “Let’s not have a betting market on our project deadline,” you’d be basically saying, “We’re not going to make the deadline, folks. Can we just set that aside and not even talk about it?”

So pushing from this equilibrium to the one where prediction markets are common would improve our society's beliefs about just about everything that one could make a prediction market for.

Arbital (or something like it)

The pitch I heard for Arbital went something like this...

[Please note that I am recalling conversations that I had back in 2016. This should not be taken as an authoritative summary of Arbital's vision or plans.]

In the old days, it used to be that when people disagreed about a simple matter of fact, there was not much recourse for resolving the disagreement. If you were committed, you could go to a library and try to research the answer, but most people didn't have the scholarship skills, or the inclination, to do that. (As an example, if two people got into a fight about the origin of the phrase "loose cannon" in pre-internet days, they might argue about it for years.)
But Wikipedia changed that, because it made it easy to verify questions of settled fact. Now if you disagree about the origin of "loose cannon", you can just check Wikipedia (or in this case, Wiktionary). Wikipedia is reliable enough, and accessible enough, to be an authoritative source.
Thus, Wikipedia narrowed the scope of things that people could confidently assert without any foundation. Because if it was the sort of thing you could check on Wikipedia, your conversation partner could just check, and you would lose social points for appearing like a confident idiot.
What Wikipedia did for settled facts, Arbital was aiming to do for still contentious topics.
For instance, questions of macroeconomic policy are pretty hard, and still controversial: even professional economists disagree about what the best approach is. But the fact that the question is not yet settled is often taken as license to promulgate any old opinion, regardless of how economically sound it is. Just because we haven't solved macro doesn't mean there aren't some distinctly wrong answers. Arbital was aiming to be an authoritative source on the state of the discussion about such not-yet-settled topics, to further narrow the space of claims that a person can confidently assert, because they know that if they say something inane, someone might refute them with the relevant Arbital page.

Now of course, setting this as your goal is one thing, and actually designing a mechanism that is able to do this is another. And Arbital did not, in fact, succeed. But if something like this could be made to work, that would be a substantial boon to high level decision making.

Indeed, even just educational tools that make it much easier to understand complicated topics might be a major help, under the (possible?) model that part of the reason why politicians and other high-level decision makers produce far-from-optimal policy is that it is too hard, or too time consuming, to make sense of the conflicting arguments about, say, economics.

Electoral Reform

My understanding is that part of the reason our government is apparently so dysfunctional is that the electoral system is biased toward polarization.

A case in point is gerrymandering, whereby districts are drawn in such a way that congressmen are all but guaranteed to win general elections, which disenfranchises voters and polarizes both parties (because in order to keep your job, you only need to appeal to your base, not cater to citizens across the political spectrum).

Similarly, the first-past-the-post system used in the United States gives rise to the spoiler effect, which punishes voting for a third party by increasing the odds that the voter's least preferred candidate wins.
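To illustrate the mechanism, here is a toy simulation (in Python, with invented vote counts) of how adding a third candidate under plurality rules can hand the election to the candidate a majority of voters like least, which is exactly the spoiler effect described above.

```python
from collections import Counter

# Toy illustration of the spoiler effect (all numbers are made up).
# Each voter has a full preference ranking; under plurality they vote
# only for their first choice among the candidates on the ballot.

# 45 voters: Right > Center > Left
# 35 voters: Center > Left > Right
# 20 voters: Left > Center > Right
ballots = (
    [("Right", "Center", "Left")] * 45
    + [("Center", "Left", "Right")] * 35
    + [("Left", "Center", "Right")] * 20
)

# Plurality with only Right and Center running: Center wins 55-45.
two_way = Counter(
    next(c for c in ranking if c in {"Right", "Center"}) for ranking in ballots
)
print(two_way)  # Counter({'Center': 55, 'Right': 45})

# Plurality with the third candidate (Left) on the ballot: the Center/Left
# bloc splits, and Right wins with 45 even though 55 voters prefer Center
# to Right. Left acts as a "spoiler" for its own supporters.
three_way = Counter(ranking[0] for ranking in ballots)
print(three_way)  # Counter({'Right': 45, 'Center': 35, 'Left': 20})
```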

It seems like solving those underlying incentive problems would moderate lawmakers, which seems likely to produce saner outcomes.

Kickstarter / Free State Project style platforms

Kickstarter is a solution to a class of collective action problems: funding the creation of products that many people want, but whose upfront startup costs no one person can afford.

It seems like there is a lot of room for collective action solutions like that to shine.

For instance, many scientists know that the statistical methods they use are less than ideal, but it would be costly for their personal careers to switch to better methods while everyone else continued to use the old ones. To solve this, young grad students might all commit to abandoning p-values, so long as x% of their peers agree to do the same.
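The grad-student example is essentially an assurance contract: each commitment only becomes binding once enough peers have signed on. A minimal sketch of that conditional trigger (hypothetical names and numbers, not any real platform):

```python
# A minimal sketch of the conditional-commitment ("assurance contract")
# logic behind the grad-student example: each pledge to abandon p-values
# only binds if enough peers pledge too. All names and numbers are
# illustrative.

def activated_pledges(pledges, population, threshold_fraction):
    """Return the pledges that become binding, or [] if the threshold isn't met."""
    if len(pledges) / population >= threshold_fraction:
        return pledges
    return []

grad_students = 200
pledges = [f"student_{i}" for i in range(130)]  # 130 of 200 have signed on

# With a 60% threshold, 130/200 = 65% is enough, so every pledge binds at
# once, and no individual pays the career cost of switching alone.
print(len(activated_pledges(pledges, grad_students, 0.60)))  # 130

# With an 80% threshold the campaign fails and nobody is bound.
print(len(activated_pledges(pledges, grad_students, 0.80)))  # 0
```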


I want to collect as many ideas for Civilizational Sanity Interventions as I can. Does anyone else have other examples?


6 Answers

Sammy Martin

170

This is a tricky problem. The first-order answer seems to be 'have the right people in power', but that's not an actionable strategy. However, it's amazing what a difference just one or two people can make - apparently a major reason the UK didn't delay its lockdown even further and risk ending up like the US is just because of Dominic Cummings.

The two main angles are either making the marketplace of ideas / electoral system select for foresight and sanity more effectively, or building institutions with specific remits that can stand aside from such pressures and make the right choices anyway. The first is really hard and the second is really dangerous. However, neither is impossible.

For the first, there's ordinary electoral reform. An interesting alternative was given in Against Democracy by Jason Brennan - he proposes a new form of epistocracy to better reach higher-quality decisions - you can judge his scheme for yourself.

For the second, building competent independent institutions and then handing off power, the track record is pretty mixed. Independent central banks come to mind as a good example; the recent horrible coronavirus debacle with the CDC, FDA, or Public Health England is an especially bad example. For how to do that sort of thing correctly, you might also want to look at all the things Dominic Cummings has proposed, starting with e.g. this, or this article on Westminster dysfunction. He likes prediction markets, but not exclusively - he talks about building decentralised institutions that can operate with a large degree of independence.

On the specific angle of being more sane with respect to X-risks, I tend to favour the second approach (independent institutions) because I think it likely has a bigger effect and is easier to pull off than raising the society-wide sanity waterline. Toby Ord spoke a lot about this in 'The Precipice'. As for why, here's Scott Alexander:

Average national IQ correlates well with GDP per capita and other measures of development. But is average national IQ really the right number to look at? “Smart fraction theory” suggests we should instead look at the range of top IQs, since the smartest people are most likely to drive national growth by inventing things or starting businesses or governing well. Now Heiner Rindermann and James Thompson (names you may recognize!) have given the hypothesis its most complete test so far, and found that yes, IQ at the 95th percentile correlates better with national development than at the 50th percentile. But I am a little skeptical of their results...

Having elite opinion be non-crazy matters a lot in situations like the one we're in right now. Don't make 'we need to improve public discourse' your plan A for avoiding this level of chaos. So as suggested here, we should hand off more and more stuff to expert boards with limited remits, follow the example of independent Central Banks which didn't turn into French-revolution style rationalist tyranny over the masses - starting with everything to do with catastrophic risks. Someone in the UK government apparently took that suggestion seriously. Just don't get Steven Pinker involved.

In writing this answer I somehow completely forgot to mention Garrett Jones' new book 10% Less Democracy, which essentially goes over every idea listed above along with many others!

johnswentworth

160

One idea I was thinking about over the last few days: academic hoaxes have been used many times over the past few decades to reveal shoddy standards in journals/subfields. The Sokal affair is probably the most famous, but there's a whole list of others linked on its Wikipedia page. Thing is, that sort of hoax always took a fair bit of effort - writing bullshit which sounds good isn't trivial! So, as a method for policing scientific rigor, it was hard to scale up without a lot of resources.

But now we have GPT2/3, which potentially changes the math dramatically.

I'd guess that a single small team - possibly even a single person - could generate and submit hundreds or even thousands of bullshit papers, in parallel. That sort of sustained pressure would potentially change journals' incentives in a way which the occasional sting doesn't. There'd probably be an arms race for a little while - journals/reviewers coming up with cheap ways to avoid proper checks, bullshit-generators coming up with ways around those defenses - but I think there's a decent chance that the end result would be proper rigor in reviews.

This would just greatly increase the amount of credentialism in academia.

I.e., unless you're affiliated with some highly elite institution or renowned scholar, no one's even gonna look at your paper.

johnswentworth
I agree this is a likely outcome, though I also think there's at least a 30% chance that the blackhats could find ways around it. Journals can't just lock it down to people the editors know personally without losing the large majority of their contributors.

This tries to solve the problem of 'bad papers getting published', but doesn't seem to touch 'good papers not getting published'.

Adam Zerner

90

Eliezer had a lot of interesting ideas in My April Fools Day Confession, where he talked about a fictional society called Dath Ilan.

My recollection of that piece is that it was mostly about the fruits of a saner society. In terms of how to get there, the intervention was "have built a systematic science of rationality, 200 years ago."

Which is a fine plan, on the time scale of 200 years. But are there interventions to deploy in the meantime?

Adam Zerner
I think the piece does point to some interventions that we could deploy right now that would improve high-level decision making. For example, Experiment With Promising Ideas. The line between what is the fruit of a saner society, and what is an intervention that will lead to a saner society, seems blurry to me though. You could argue that Experiment With Promising Ideas is something that is impractical because we're not sane enough yet, and we have to get more sane first before trying to implement it. Or you could argue that it's something that we are capable of doing right now, and that it's part of the path towards sanity. Here are some other excerpts from My April Fools Day Confession that might address your question:

* It’s not the evidence-based massage therapists who’ve been iterating their art with randomized experiments and competitions for 350 years
* In the world of dath ilan, everyone learns at age 9 about Nash equilibria, and there is a concept of making a collective and virtuous effort to get past them. So as soon as computers and batteries were good enough to autopilot electric cars in a system of tunnels, the thing was done.
* And now I’m talking about how the economy worked, so I’ll go ahead and talk about some other things that dath ilan considered obvious. The medical profession was divided into junior diagnosticians, whose main job was to diagnose the obvious and know when the obvious had been called into doubt; and senior diagnosticians, who were highly paid and high-IQ and shadarak-trained, who could apply Bayes’s Rule in their sleep, and memorized all the prior probabilities, and had computers, and were graded on their probability calibrations.
* By which I mean that there would be centralized development of movies you watched on your own, and the training-games you played in what I won’t insult by calling it a school, and experiments to find out which variations worked.
* And even with respect to thorium power plants, China could offe

Pontor

80

Electoral reform: The proponents of Random Sample Voting make it sound pretty cool. Appendix 1 in this white paper gives an efficient summary: https://rsvoting.org/whitepaper/white_paper.pdf

Kickstartery things: Dominant Assurance Contracts (DACs) are similar to regular assurance contracts (including Kickstarter campaigns), except with tweaked incentives that attract pledges from otherwise indifferent parties. For explanation and discussion, I recommend these links: https://www.cato-unbound.org/2017/06/07/alex-tabarrok/making-markets-work-better-dominant-assurance-contracts-some-other-helpful http://jessic.at/writing/dac.pdf
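For readers who haven't followed the links, here is a toy sketch of the incentive tweak that, as I understand Tabarrok's proposal, distinguishes a DAC from an ordinary assurance contract: if the campaign misses its funding threshold, pledgers get their money back plus a small bonus, so even someone who expects the campaign to fail is better off pledging. All numbers are invented for illustration.

```python
# A toy comparison of pledger payoffs under a plain assurance contract vs. a
# dominant assurance contract (DAC). The one tweak: if the campaign misses its
# threshold, a DAC refunds each pledge *plus* a bonus posted by the
# entrepreneur. All numbers are hypothetical.

PLEDGE = 100          # cost of pledging
VALUE = 150           # what the good is worth to this person, if produced
FAILURE_BONUS = 5     # DAC-only payment to pledgers when the campaign fails

def payoff(pledged: bool, campaign_succeeds: bool, dac: bool) -> int:
    """Net payoff to one potential contributor."""
    if campaign_succeeds:
        return (VALUE - PLEDGE) if pledged else VALUE
    # Campaign fails: a plain contract just refunds; a DAC refunds plus a bonus.
    return (FAILURE_BONUS if dac else 0) if pledged else 0

# Someone who expects the campaign to fail is indifferent under a plain
# assurance contract (0 either way), but strictly prefers pledging under a
# DAC (bonus vs. nothing) -- which is how DACs attract pledges from
# otherwise indifferent parties.
print(payoff(pledged=True, campaign_succeeds=False, dac=False))   # 0
print(payoff(pledged=False, campaign_succeeds=False, dac=False))  # 0
print(payoff(pledged=True, campaign_succeeds=False, dac=True))    # 5
```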

Other: Vitalik Buterin wrote, "Conditional payments for paywalled content--after you pay for a piece of downloadable content and view it, you can decide after the fact if payments should go to the author or to proportionately refund previous readers". He also sketched out a mechanism by which mail recipients can price spammers out of their attention: https://ethresear.ch/t/conditional-proof-of-stake-hashcash/1301 I like these two ideas because they directly help individuals economize their own attention, even if they aren't exactly civilizational sanity interventions in the way you're talking about.

I like Buterin's conditional payments proposal. It ensures a reasonable net price for content, proportional to the quality of the content, and it allows for punishing clickbait, while removing the personal incentive to cheat good producers out of deserved rewards.

It would be especially useful for alleviating the refund controversy that's been going on with video games.

Regarding DACs: I think a sponsor's use of a DAC serves as an anti-signal about confidence in the project's potential for success, indicating a lack of confidence in the proposal's compellingness. Plus (theoretically at least), a project that would succeed with a DAC would be highly likely to be crowdfunded anyway. Combine that with the risk that, to lay people, DACs bear a vague resemblance to Ponzi schemes, and it may explain their current lack of popularity, despite their having been known and easily feasible for >20 years.


Vaniver

70
My understanding is that part of the reason our government is apparently so dysfunctional is that the electoral system is biased toward polarization.

While I think better voting systems would be better (score voting or approval voting seem like clear improvements over the status quo), the electoral system has been this way for a long time, while polarization has increased dramatically only recently. That suggests to me that it's not downstream of the voting system, and simple fixes to the voting system won't solve it.

Note also that politicians will strategically choose to be less polarizing, if being less polarizing is the recipe for electoral success. (Or less-polarizing politicians will be the ones who succeed and become prominent contributors to the national conversation.) And people take cues from politicians; they don't just elect politicians who agree with their fixed opinions. So anyway, I guess I'm saying, there isn't a clean upstream / downstream flow, I think...

I think you're probably right, but I'm also not sure how much one can infer from the analysis as stated. Maybe you need both first-past-the-post and Facebook for things to get this bad, and fixing only one of them is sufficient.

I guess one way to check would be to compare to other countries with better electoral systems. Are they suffering from the same extreme Left-Right polarization as the US?

Eli Tyre
This also leaves me curious. Do other countries have the equivalent of Fox News (i.e. news specifically for one side of the tribal divide, constantly attacking the other side)? To be clear, the so-called "Liberal Media" / "mainstream media" also contains a lot of tribal narrativization, but Fox News is special (I think?) in being the only major TV news outlet that deviates and pushes an opposite and antithetical narrative.
Egon Freeman
Have a look at Poland - in a way, the late 90s is an exercise in the failures of a multi-party system, whereas the current state (mid-2020) is an exercise in the failures of a two-party system (which Poland seems to be hurtling towards). We've recently (within just the last decade) seen an emergence of media outlets that tend to be rather clearly biased - enough that, at this point, the general population is often quick to associate a particular TV station with a particular party, even. Obviously, there have always been divisions in this regard, where a media outlet was considered "leftist" or "rightist", or whatever... but it feels as though it wasn't much of a debate in 2008, and in 2020 it feels as if it's mentioned in every conversation. It has gotten to the point where both sides will accuse you of reading "fringe media" if you opt for something neither side has their (perceived) fingers in.

ChristianKl

20

Expertise measurement via credence calibration. I wrote Prediction-based-Medicine to lay out the concept for medicine.

It's also applicable to a variety of other professionals who make a lot of decisions that have clear measured outcomes. If you look, for example, at the people filling parole boards, you can have them predict recidivism rates.

Government bureaucrats who predict how variables will develop in the future can be scored on their credences.
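One standard way to implement this kind of credence scoring is a proper scoring rule such as the Brier score. Here is a minimal sketch (in Python, with made-up predictions and names) of how a professional's stated probabilities could be scored against measured outcomes.

```python
# A minimal sketch of "scoring professionals on credence": record each
# probabilistic prediction alongside the measured outcome, then compute a
# proper scoring rule such as the Brier score (lower is better, 0 is perfect).
# Data and names are made up for illustration.

def brier_score(predictions):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# (probability assigned to the event, whether it actually happened: 1 or 0)
parole_board_member = [(0.9, 1), (0.2, 0), (0.6, 1), (0.7, 0)]
overconfident_peer = [(0.99, 1), (0.01, 0), (0.95, 0), (0.99, 0)]

print(round(brier_score(parole_board_member), 3))  # 0.175
print(round(brier_score(overconfident_peer), 3))   # 0.471

# Tracked over many decisions, these scores give a concrete, outcome-based
# measure of whose judgment to trust.
```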

9 comments

Robin Hanson posits that the reason why there isn’t wider adoption of prediction markets is because they are a threat to the authority of existing executives.

Before we reach for conspiracies, maybe we should investigate just how effective prediction markets actually are. I'm generally skeptical of arguments in the mold of "My pet project x isn't being implemented due to the influence of shadowy interest group y."

As someone unfamiliar with the field, are there any good studies on the effectiveness of PM?

There's nothing shadowy about the claim that CEOs like to be able to decide on the strategy of their company and don't like the idea of giving up that power by delegating it to a prediction market.

To measure how effective it is for companies to let their strategy decisions be guided by prediction markets, you would need some companies to actually do that. We don't live in a world where that's the case.

I'm not sure "conspiracy" is appropriate here. The existing Powers That Be (both political and corporate) have individual and collective interests in maintaining their current conditions. That they might each and all act to preserve the status quo (where they are powerful) probably does not actually require coordination of any kind, nor the secrecy that usually accompanies the term "conspiracy". I expect that no matter how effective prediction markets are, they will generally lack the necessary slack to dominate the existing systems.

Similarly, the first past the post system used in the United States gives rise to the spoiler effect, which penalizes third parties by increasing the odds that that

?

Debates are costly in terms of time and effort, and might demonstrate your position to be untenable; better not to risk it.

I'm not clear what this is responding to.

It's a sarcastic response: what the defense mechanisms for the equilibrium of leaving these things unexamined might say, if they were a person you could talk to.

I think a lot about this difficult, ill-defined problem (ill-defined in the sense that people conceptualize the relative importance of the conditions that perpetuate senselessness differently), and I often find myself coming back to ideas/hypotheses related to a) the individual desire for power/authority (of various forms), which is appealing expressly because it is higher than, and can be imposed upon, the 'lower' power of others, and/or b) the individual desire for closure, certainty, cognitive fluency, and a reduction of cognitive dissonance. See This Article Won't Change Your Mind.

With respect to the power piece, I think the dominant incentive structures of the times (and by this I don't just mean money/authority - incentives ranging from "feeling good about oneself" to "being seen as morally good by others" to "feeling epistemically superior", etc.), as well as the normalization of self-absorption via social media, have been really counter-productive to intellectual honesty, intellectual humility, and co-productive discourse and deliberation...

It seems ironic that these power structures may have come about (initially) such that actual 'good work' was rewarded appropriately: the agents who were the means of production of that work probably never cared about receiving credit for it in the first place; that rewarding recognition was merely a byproduct of an initial goal to do good work for the sake of good work itself and for the sake of one another. Over time, we see a system that conditions people and groups such that they can no longer distinguish between 'socioeconomic credits' and 'good work'. As a result, we see agents who avoid the production of 'good work' entirely, by creating the perception that they do 'good work' through signaling and amplifying their existing social/economic credit. This is then reinforced (the system can no longer distinguish between what is good work and what is merely made to look like good work), leading to agents accruing more socioeconomic credits without a corresponding production of 'good work'. This mutually reinforcing dynamic, particularly in academic settings, may undermine the honest pursuit of knowledge, and it is not conducive to individual or collective progress.

Under circumstances where convergence with powerful others, status, money, security, etc. become the primary drivers and outcomes of participation, rather than the process of honest work itself, it becomes hard not to engage in the rat race. If we take the claim that these incentives are reciprocal to self-absorption (you want to 'feel good about yourself', you want more money than other people, you want to feel like 'you help people', you want others to approve of you based on the public perception that 'you help people', you want a job title that society has deemed 'more valuable' than others in some way, etc.), it follows that organizations that refrain from reifying individual reward could make space for individual-to-individual attraction toward enacting power with one another in equal partnership. Hopefully, this would lead to the honest co-production of progress, inclusion, and harm reduction.

For those who do not find the power-based incentive structure of the system especially appealing, or who feel uncomfortable with implicitly being 'given' more power than others, or who are attracted to the integrity of the process of work itself, it may be important to break from the feedback loop by generating autonomous cooperative interactions from which the co-creation of shared value (i.e. systems reorganization and re-coupling) emerges. (See the Autopoiesis wiki reference for an analogue within the systems science framework; for more breadth and depth that is dense but worthwhile, see "From autopoiesis to neurophenomenology: Francisco Varela's exploration of the biophysics of being".) This emergence functions such that the meta-structure and meta-function of the entire system changes... and with any luck, it changes such that we see less suffering, less insanity, more connection, more results.

Maybe pushing UBI could be one way to create that cushion that would be needed to allow people to voluntarily commit time and mental energy to strategic ideation and implementation. I also think that it would make it more likely that people remain principled, analytical, and honest in their jobs (being rational and ethical confers individual risk these days apparently) if working within a larger organization. Losing their job due to office politics wouldn't render them homeless/completely incapacitated, and it at least slightly lessens the intense dependence on (and therefore compliance with) one's potentially insane organizational ecosystem of employment.