This is a tricky problem. The first-order answer seems to be 'have the right people in power', but that's not an actionable strategy. However, it's amazing what a difference just one or two people can make - apparently a major reason the UK didn't delay its lockdown even further, and risk ending up like the US, is Dominic Cummings.
The two main angles are either making the marketplace of ideas / electoral system select for foresight and sanity more effectively, or building institutions with specific remits that can stand aside from such pressures and make the right choices anyway. The first is really hard and the second is really dangerous. However, neither is impossible.
For the first, there's ordinary electoral reform. An interesting alternative was given in Against Democracy by Jason Brennan - he proposes a new form of epistocracy to better reach higher-quality decisions - you can judge his scheme for yourself.
For the second, building competent independent institutions and then handing off power, the track record is pretty mixed. Independent central banks come to mind as a good example; the recent horrible coronavirus debacle with the CDC, FDA, and Public Health England is an especially bad one. For how to do that sort of thing correctly, you might also want to look at all the things Dominic Cummings has proposed, starting with e.g. this, or this article on Westminster dysfunction. He likes prediction markets, but not exclusively - he talks about building decentralised institutions that can operate with a large degree of independence.
On the specific angle of being more sane with respect to X-risks, I tend to favour the second approach (independent institutions) because I think it likely has a bigger effect and is easier to pull off than raising the society-wide sanity waterline. Toby Ord spoke a lot about this in 'The Precipice'. As for why, here's Scott Alexander:
Average national IQ correlates well with GDP per capita and other measures of development. But is average national IQ really the right number to look at? “Smart fraction theory” suggests we should instead look at the range of top IQs, since the smartest people are most likely to drive national growth by inventing things or starting businesses or governing well. Now Heiner Rindermann and James Thompson (names you may recognize!) have given the hypothesis its most complete test so far, and found that yes, IQ at the 95th percentile correlates better with national development than at the 50th percentile. But I am a little skeptical of their results...
Having elite opinion be non-crazy matters a lot in situations like the one we're in right now. Don't make 'we need to improve public discourse' your plan A for avoiding this level of chaos. So, as suggested here, we should hand off more and more stuff to expert boards with limited remits, following the example of independent central banks, which didn't turn into a French Revolution-style rationalist tyranny over the masses - starting with everything to do with catastrophic risks. Someone in the UK government apparently took that suggestion seriously. Just don't get Steven Pinker involved.
In writing this answer I somehow completely forgot to mention Garrett Jones' new book 10% Less Democracy, which essentially goes over every idea listed above along with many others!
One idea I was thinking about over the last few days: academic hoaxes have been used many times over the past few decades to reveal shoddy standards in journals/subfields. The Sokal affair is probably the most famous, but there's a whole list of others linked on its wikipedia page. Thing is, that sort of hoax always took a fair bit of effort - writing bullshit which sounds good isn't trivial! So, as a method for policing scientific rigor, it was hard to scale up without a lot of resources.
But now we have GPT2/3, which potentially changes the math dramatically.
I'd guess that a single small team - possibly even a single person - could generate and submit hundreds or even thousands of bullshit papers, in parallel. That sort of sustained pressure would potentially change journals' incentives in a way which the occasional sting doesn't. There'd probably be an arms race for a little while - journals/reviewers coming up with cheap ways to avoid proper checks, bullshit-generators coming up with ways around those defenses - but I think there's a decent chance that the end result would be proper rigor in reviews.
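To make the scale point concrete, here's a rough sketch (my own toy example, not anything anyone has actually run) of how cheaply candidate text could be churned out with an off-the-shelf GPT-2 via the HuggingFace transformers library. A real sting would obviously need fine-tuning on the target subfield and heavy manual curation:

```python
# Minimal sketch: mass-generating plausible-sounding abstract openers with
# an off-the-shelf GPT-2 via the HuggingFace `transformers` library.
# Illustrative only; prompt, sampling settings, and scale are all made up.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Abstract: In this paper we argue that "

# One call can emit many candidates; looping over prompts and target
# journals is what would make this kind of sting scale.
candidates = generator(
    prompt,
    max_length=200,
    num_return_sequences=20,
    do_sample=True,
    top_p=0.95,
)

for i, candidate in enumerate(candidates):
    print(f"--- candidate {i} ---")
    print(candidate["generated_text"])
```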
This would just greatly increase the amount of credentialism in academia.
I.e., unless you're affiliated with some highly elite institution or renowned scholar, no one's even gonna look at your paper.
This tries to solve the problem of 'bad papers getting published', but doesn't seem to touch 'good papers not getting published'.
Eliezer had a lot of interesting ideas in My April Fools Day Confession, where he talked about a fictional society called Dath Ilan.
My recollection of that piece was it was mostly about the fruits of a saner society. In terms of how to get there, the intervention was "have built a systematic science of rationality, 200 years ago."
Which is a fine plan, on the time scale of 200 years. But are there interventions to deploy in the meantime?
Electoral reform: The proponents of Random Sample Voting make it sound pretty cool. Appendix 1 in this white paper gives an efficient summary: https://rsvoting.org/whitepaper/white_paper.pdf
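For a sense of why a modest random sample is statistically sufficient, here's a toy simulation of my own (not from the white paper, and ignoring all the cryptographic machinery RSV actually relies on for verifiable sampling and coercion resistance):

```python
# Toy illustration of the statistical core of Random Sample Voting: a modest
# random sample of the electorate recovers the full-population majority with
# high probability. All numbers below are made up for illustration.
import random

random.seed(0)

ELECTORATE = 100_000
TRUE_SUPPORT = 0.52      # assumed fraction of the electorate favouring option A
SAMPLE_SIZE = 2_000
TRIALS = 500

population = [random.random() < TRUE_SUPPORT for _ in range(ELECTORATE)]
true_majority = sum(population) / ELECTORATE > 0.5

matches = 0
for _ in range(TRIALS):
    sample = random.sample(population, SAMPLE_SIZE)
    if (sum(sample) / SAMPLE_SIZE > 0.5) == true_majority:
        matches += 1

print(f"Sample of {SAMPLE_SIZE} matched the full-electorate majority "
      f"in {matches}/{TRIALS} trials")
```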
Kickstartery things: Dominant Assurance Contracts (DACs) are similar to regular assurance contracts (including Kickstarter campaigns), except with tweaked incentives that attract pledges from otherwise indifferent parties. For explanation and discussion, I recommend these links: https://www.cato-unbound.org/2017/06/07/alex-tabarrok/making-markets-work-better-dominant-assurance-contracts-some-other-helpful http://jessic.at/writing/dac.pdf
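To make the incentive tweak concrete, here's a minimal sketch (my own toy numbers) of the payoff rule that distinguishes a DAC from an ordinary assurance contract:

```python
# Sketch of the payoff rule that distinguishes a dominant assurance contract
# from a plain Kickstarter-style assurance contract: if the funding threshold
# is missed, pledgers get their money back PLUS a failure bonus paid by the
# entrepreneur, which is what makes pledging a (weakly) dominant strategy for
# anyone who values the good at all. Names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class DACampaign:
    threshold: float        # total funding needed for the project to go ahead
    pledge_price: float     # contribution asked of each pledger
    failure_bonus: float    # bonus the entrepreneur pays each pledger on failure

    def settle(self, num_pledgers: int) -> dict:
        raised = num_pledgers * self.pledge_price
        if raised >= self.threshold:
            # Success: pledges are collected and the good is produced.
            return {"funded": True, "per_pledger_payout": 0.0}
        # Failure: pledges are refunded and each pledger receives the bonus.
        return {"funded": False,
                "per_pledger_payout": self.pledge_price + self.failure_bonus}

campaign = DACampaign(threshold=10_000, pledge_price=100, failure_bonus=5)
print(campaign.settle(num_pledgers=80))   # falls short -> refund plus bonus
print(campaign.settle(num_pledgers=120))  # threshold met -> project funded
```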
Other: Vitalik Buterin wrote, "Conditional payments for paywalled content--after you pay for a piece of downloadable content and view it, you can decide after the fact if payments should go to the author or to proportionately refund previous readers". He also sketched out a mechanism by which mail recipients can price spammers out of their attention: https://ethresear.ch/t/conditional-proof-of-stake-hashcash/1301 I like these two ideas because they directly help individuals economize their own attention, even if they aren't exactly civilizational sanity interventions in the way you're talking about.
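As a rough sketch of how the conditional-payment idea could work (a plain-Python toy of my own, not the on-chain mechanism Buterin actually sketched):

```python
# Toy version of "conditional payments for paywalled content": each reader
# pays up front, then after reading flags the piece as worth it or not.
# Approved payments go to the author; disapproved ones are split among
# earlier readers. How to handle the very first reader's disapproval is an
# assumption of mine; the original description doesn't pin it down.
def settle_payments(payments):
    """payments: list of (amount, approved) tuples in reading order."""
    author_revenue = 0.0
    refunds = [0.0] * len(payments)
    for i, (amount, approved) in enumerate(payments):
        if approved:
            author_revenue += amount     # reader judged it worth the price
        elif i > 0:
            share = amount / i           # split among the i earlier readers
            for j in range(i):
                refunds[j] += share
        else:
            refunds[0] += amount         # no earlier readers: simple refund
    return author_revenue, refunds

revenue, refunds = settle_payments([(1.0, True), (1.0, False), (1.0, True)])
print(revenue, refunds)  # author keeps 2.0; the disapproved 1.0 refunds reader 0
```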
I like Buterin's conditional payments proposal. Ensures a reasonable net price for content, proportional to quality of the content, and it allows for punishing clickbait, while removing personal incentive to cheat good producers out of deserved rewards.
It would be especially useful for alleviating the refund controversy that's been going on with video games.
Regarding DACs: I think that a sponsor resorting to a DAC serves as an anti-signal, indicating a lack of confidence in how compelling the proposal is on its own. Add the fact that (theoretically, at least) a project which would succeed with a DAC would very likely be crowdfunded anyway, plus the risk that, to lay people, the mechanism bears a vague resemblance to a Ponzi scheme, and that may explain the current lack of popularity of DACs, despite their having been known and easily feasible for >20 years.
I guess ...
My understanding is that part of the reason our government is apparently so dysfunctional is that the electoral system is biased toward polarization.
While I think better voting systems would be better (score voting or approval voting seem like clear improvements over the status quo), the electoral system has been this way for a long time, whereas polarization has increased dramatically only recently. That suggests to me it's not downstream of the voting system, and simple fixes to the voting system won't solve it.
Note also that politicians will strategically choose to be less polarizing, if being less polarizing is the recipe for electoral success. (Or less-polarizing politicians will be the ones who succeed and become prominent contributors to national conversation.) And people take cues from politicians, they don't just elect politicians who agree with their fixed opinions. So anyway, I guess I'm saying, there isn't a clean upstream / downstream flow, I think...
I think you're probably right, but I'm also not sure how much one can infer from the analysis as stated. Maybe you need both first past the post and Facebook for things to get this bad, and fixing only one of those things is sufficient.
I guess one way to check would be to compare to other countries with better electoral systems. Are they suffering from the same extreme Left-Right polarization as the US?
Expertise measurement via credence calibration. I wrote Prediction-based Medicine to lay out the concept for medicine.
It's also applicable to a variety of other professionals who make a lot of decisions with clear, measurable outcomes. If, for example, you look at the people filling parole boards, you can have them predict recidivism rates.
Government bureaucrats who predict the future values of relevant variables can likewise be scored on their credences.
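For concreteness, here's a minimal sketch of scoring such predictions with a Brier score (my own toy numbers, not from the Prediction-based Medicine post):

```python
# Minimal sketch of scoring professionals on their credences: each prediction
# is a stated probability plus the eventual binary outcome, and the Brier
# score (mean squared error of the probabilities) gives a single number that
# rewards calibrated, discriminating forecasters. Lower is better; constant
# 50% guessing earns 0.25.
def brier_score(predictions):
    """predictions: list of (stated_probability, outcome) with outcome in {0, 1}."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

# Hypothetical parole-board example: predicted recidivism probabilities
# versus what actually happened.
board_member_a = [(0.9, 1), (0.2, 0), (0.3, 0), (0.7, 1)]
board_member_b = [(0.5, 1), (0.5, 0), (0.5, 0), (0.5, 1)]
print(brier_score(board_member_a))  # 0.0575 -- well calibrated and informative
print(brier_score(board_member_b))  # 0.25   -- no better than guessing
```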
Robin Hanson posits that the reason why there isn’t wider adoption of prediction markets is because they are a threat to the authority of existing executives.
Before we reach for conspiracies, maybe we should investigate just how effective prediction markets actually are. I'm generally skeptical of arguments in the mold of "My pet project x isn't being implemented due to the influence of shadowy interest group y."
As someone unfamiliar with the field, are there any good studies on the effectiveness of PM?
There's nothing shadowy about the claim that CEOs like to be able to decide on the strategy of their company and don't like the idea of giving up that power by delegating it to a prediction market.
To measure how effective it is for companies to let their strategy decisions be guided by prediction markets, you would need some companies to actually do that. We don't live in a world where that's the case.
I'm not sure "conspiracy" is appropriate here. The existing Powers That Be (both political and corporate) have individual and collective interests in maintaining their current conditions. That they might each and all act to preserve the status quo (where they are powerful) probably does not actually require coordination of any kind, nor the secrecy that usually accompanies the term "conspiracy". I expect that no matter how effective prediction markets are, they will generally lack the necessary slack to dominate the existing systems.
Similarly, the first past the post system used in the United States gives rise to the spoiler effect, which penalizes third parties by increasing the odds that their least preferred candidate wins.
?
Debates are costly in terms of time and effort and might demonstrate your position to be untenable; better not to risk it.
I think a lot about this difficult, ill-defined problem (ill-defined in the sense that people conceptualize the relative importance of the conditions that perpetuate senselessness differently), and I often find myself coming back to ideas/hypotheses related to a) the individual desire for power/authority (of various forms) that is appealing expressly because it is higher than, and can be imposed upon, the 'lower' power of others, and/or b) the individual desire for closure, certainty, cognitive fluency, and a reduction of cognitive dissonance. See This Article Won't Change Your Mind.
With respect to the power piece, I think the dominant incentive structures of the times (and by this I don't just mean money/authority - incentives ranging from "feeling good about oneself" to "being seen as morally good by others" to "feeling epistemically superior", etc.), as well as the normalization of self-absorption via social media, have been really counter-productive to intellectual honesty, intellectual humility, and co-productive discourse and deliberation...
It seems ironic that these power structures may have come about (initially) so that actual 'good work' was rewarded appropriately: the agents who produced that work probably never cared about receiving credit for it in the first place; the rewarding recognition was merely a byproduct of an initial goal to do good work for its own sake and for the sake of one another. Over time, we see a system that conditions people and groups such that they can no longer distinguish between 'socioeconomic credits' and 'good work'. As a result, we see agents who avoid producing 'good work' entirely, instead creating the perception that they do 'good work' by signaling and amplifying their existing social/economic credit. This is then reinforced (the system can no longer distinguish between what is good work and what is merely made to look like good work), leading to agents accruing more socioeconomic credits without a corresponding production of 'good work'. This mutually reinforcing dynamic, particularly in academic settings, may undermine the honest pursuit of knowledge, and it is not conducive to individual or collective progress.

Under circumstances where convergence with powerful others, status, money, security, etc. become the primary drivers and outcomes of participation, rather than the process of honest work itself, it becomes hard not to engage in the rat race. If we accept the claim that these incentives are reciprocal to self-absorption (you want to 'feel good about yourself', you want more money than other people, you want to feel like 'you help people', you want others to approve of you based on the public perception that 'you help people', you want a job title that society has deemed 'more valuable' than others in some way, etc.), it follows that organizations that refrain from reifying individual reward could make space for the individual-to-individual attraction toward enacting power with one another in equal partnership. Hopefully, this would lead to the honest co-production of progress, inclusion, and harm reduction.

For those who do not find the power-based incentive structure of the system especially appealing, or who feel uncomfortable with implicitly being 'given' more power than others, or who are attracted to the integrity of the process of work itself, it may be important to break from the feedback loop by generating autonomous cooperative interactions from which the co-creation of shared value (i.e. systems reorganization and re-coupling) emerges. (See the Autopoiesis wiki reference for an analogue within the systems science framework. For more breadth and depth that is dense but worthwhile, see "From autopoiesis to neurophenomenology: Francisco Varela's exploration of the biophysics of being".) This emergence functions such that the meta-structure and meta-function of the entire system changes... and with any luck, it changes such that we see less suffering, less insanity, more connection, and better results.
Maybe pushing UBI could be one way to create that cushion that would be needed to allow people to voluntarily commit time and mental energy to strategic ideation and implementation. I also think that it would make it more likely that people remain principled, analytical, and honest in their jobs (being rational and ethical confers individual risk these days apparently) if working within a larger organization. Losing their job due to office politics wouldn't render them homeless/completely incapacitated, and it at least slightly lessens the intense dependence on (and therefore compliance with) one's potentially insane organizational ecosystem of employment.
Lately, I've been thinking about the class of things that I'm calling "Civilizational Sanity Interventions." With that term I'm meaning to refer to technologies, institutions, projects, or norms that, if implemented, would improve the quality of high level decision making about important issues.
Which things if they existed in the world, would make our society, collectively, saner?
Some examples (with which I expect most people around here to be familiar):
Prediction markets
Prediction markets are a clever way to aggregate all the available information to make accurate predictions.
Robin Hanson posits that the reason why there isn't wider adoption of prediction markets is because they are a threat to the authority of existing executives.
If we lived in a world where the use of prediction markets were commonplace standard practice, decision makers would eventually face flak for acting against the predictions of the market, and pundits would have a lot less leeway to make inaccurate, politically-motivated predictions.
Hanson, in a recent interview,
So pushing from this equilibrium to the one where prediction markets are common would improve our society's beliefs about just about everything one could make a prediction market for.
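For concreteness, here's a minimal sketch of the Logarithmic Market Scoring Rule (LMSR), the automated market maker Hanson proposed for running exactly these kinds of markets (the outcome names and parameter values below are just illustrative):

```python
# Minimal sketch of Hanson's Logarithmic Market Scoring Rule (LMSR): traders
# buy shares in outcomes, a cost function sets what each trade costs, and the
# resulting prices can be read as the market's current probabilities.
import math

class LMSRMarket:
    def __init__(self, outcomes, b=100.0):
        self.b = b                               # liquidity parameter
        self.q = {o: 0.0 for o in outcomes}      # outstanding shares per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(v / self.b) for v in q.values()))

    def price(self, outcome):
        denom = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / denom

    def buy(self, outcome, shares):
        """Return what the trader pays the market maker for `shares` shares."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

market = LMSRMarket(["policy succeeds", "policy fails"])
print(market.price("policy succeeds"))        # 0.5 before any trading
cost = market.buy("policy succeeds", 50)
print(cost, market.price("policy succeeds"))  # price rises after the buy
```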
Arbital (or something like it)
The pitch I heard for Arbital went something like this...
[Please note that I am recalling conversations that I had back in 2016. This should not be taken as an authoritative summary of Arbital's vision or plans.]
Now of course, setting this as your goal is one thing, and actually designing a mechanism that is able to do this is another. And Arbital did not, in fact, succeed. But if something like this could be made to work, that would be a substantial boon to high level decision making.
Indeed, even just educational tools that make it much easier to understand complicated topics might be a major help, under the (possible?) model that part of the reason politicians and other high-level decision makers produce far-from-optimal policy is that it is too hard, or too time consuming, to make sense of the conflicting arguments about, say, economics.
Electoral Reform
My understanding is that part of the reason our government is apparently so dysfunctional is that the electoral system is biased toward polarization.
A case in point is gerrymandering, whereby districts are drawn in such a way that congressmen are all but guaranteed to win general elections, which disenfranchises voters and polarizes both parties (because in order to keep your job, you only need to appeal to your base, not cater to citizens across the political spectrum).
Similarly, the first past the post system used in the United States gives rise to the spoiler effect, which penalizes third parties by increasing the odds that their least preferred candidate wins.
It seems like solving those underlying incentives problems would moderate law makers, which seems likely to produce saner outcomes.
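A toy simulation of the spoiler effect makes the point (the candidates and numbers below are made up): under plurality, a third candidate who splits one bloc's vote hands the win to the other side, while approval voting doesn't punish that bloc for fielding two similar candidates.

```python
# Toy illustration of the spoiler effect: 60% of voters prefer the A/B bloc
# but split between them, 40% prefer C. Each ballot lists approved candidates
# in preference order, so ballot[0] is the plurality vote.
from collections import Counter

ballots = (
    [["A", "B"]] * 35 +   # prefer A, also approve B
    [["B", "A"]] * 25 +   # prefer B, also approve A
    [["C"]] * 40          # prefer C only
)

plurality = Counter(ballot[0] for ballot in ballots)
approval = Counter(c for ballot in ballots for c in ballot)

print("Plurality:", plurality.most_common())  # C wins with 40 despite 60% opposed
print("Approval: ", approval.most_common())   # A and B both beat C (60 vs 40)
```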
Kick-starter / Free state project style platforms
Kickstarter is a solution to a class of collective action problems, funding the creation of products that many people would want, but no one person can afford to pay the upfront startup costs for.
It seems like there is a lot of room for collective action solutions like that to shine.
For instance, many scientists know that the statistical methods that they use are less than ideal, but it would be costly for their personal careers if they switched to better methods, while everyone else continued to use the old ones. To solve this, young grad students might all commit to abandon using p-values, so long as x% of their peers agree to do the same.
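As a sketch of the underlying mechanism (the field names and the 60% threshold below are made up), the pledge only becomes binding once enough peers have signed on, so no one bears the cost of moving first alone:

```python
# Toy conditional-commitment tracker: pledges to abandon p-values become
# binding only once a chosen fraction of the peer group has signed on.
class ConditionalCommitment:
    def __init__(self, group_size, activation_fraction=0.6):
        self.group_size = group_size
        self.activation_fraction = activation_fraction
        self.pledgers = set()

    def pledge(self, name):
        self.pledgers.add(name)

    def is_binding(self):
        return len(self.pledgers) / self.group_size >= self.activation_fraction

campaign = ConditionalCommitment(group_size=100)
for student in (f"grad_student_{i}" for i in range(59)):
    campaign.pledge(student)
print(campaign.is_binding())        # False -- still below the 60% threshold
campaign.pledge("grad_student_59")
print(campaign.is_binding())        # True -- the commitment activates
```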
I want to collect as many ideas for Civilizational Sanity Interventions as I can. Does anyone else have other examples?