I think it's wrong to call this a criticism of the rationality community. The people who designed systems like market auctions for spectrum aren't members of the rationalist community.
When I try to think of an institution in our movement that advocates knowledge gathering through formal methods without any attempt at openness, none comes to mind.
You could call GiveWell an organisation that uses formal methods for knowledge gathering, but it is also an institution that releases recordings of its board meetings to the public. Actions like that are a costly signal of valuing legibility.
CFAR doesn't teach people to reason with formal systems; that's simply not what they teach.
Yeah. The econ part wasn't so bad - I lived through the shock therapy of 90s Russia, and Glen is spot on when he blames it on unaccountable technocratic governance. But when it comes to AI alignment, it seems like he hasn't heard of the corrigibility and interpretability work by MIRI, FHI, and OpenAI.
radicalxchange.org is not working in Google Chrome for me. I assumed it was because of one of my many Chrome extensions, but maybe it's an issue with the site itself? Works in Firefox/Opera.
I've tweeted at them twice about this problem. Not sure how else to contact them to get it fixed :/
Economist Glen Weyl has written a long essay, "Why I Am Not A Technocrat", a major focus of which is his differences with the rationalist community.
I feel like I've read a decent number of outsider critiques of the rationalist community at this point, and Glen's critique is pretty good. It has the typical outsider critique weakness of not being fully familiar with the subject of its criticism, balanced by the strength of seeing the rationalist community from a perspective we're less familiar with.
As I was reading Glen's essay, I took some quick notes. Afterwards I turned them into this post.
Glen's Strongest Points
So far, this sounds a lot like discussions I've seen previously of the book Seeing Like a State. But here's where Glen goes further:
...
...
...
...
...
...
...
(Please let me know if you think I left out something critical)
A famous quote about open source software development states that "given enough eyeballs, all bugs are shallow". Nowadays, with critical security bugs in open-source software like Heartbleed, the spirit of this claim isn't taken for granted anymore. One Hacker News user writes: "[De facto eyeball shortage] becomes even more dire when you look at code no one wants to touch. Like TLS. There were the Heartbleed and goto fail bugs which existed for, IIRC, a few years before they were discovered. Not surprising, because TLS code is generally some of the worst code on the planet to stare at all day."
In other words, if you want critical feedback on your open source project, it's not enough just to put it out there and have lots of users. You also want to make the source code as accessible as possible--and this may mean compromising on other aspects of the design.
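To make that tradeoff concrete, here's a toy sketch of my own (not Glen's): two equivalent bit-counting functions, one optimized for cleverness and one for reviewability. The legible version gives up a well-known trick so that a reviewer can verify it at a glance.

```python
# Two equivalent ways to count the set bits in a non-negative integer.

def popcount_clever(x):
    # Brian Kernighan's trick: each iteration clears the lowest set bit.
    # Fast and famous, but opaque if you haven't seen it before.
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

def popcount_legible(x):
    # Examine each bit explicitly. Slightly slower, but any reviewer
    # can confirm its correctness without knowing a trick.
    count = 0
    while x:
        count += x & 1
        x >>= 1
    return count

assert popcount_clever(0b101101) == popcount_legible(0b101101) == 4
```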
Academic or other in-group status games may encourage the use of big words. But we'd be better off rewarding simple explanations--not only are simple explanations more accessible, they also demonstrate deeper understanding. If we appreciated simplicity properly:
We'd incentivize the creation of more simple explanations, promoting accessibility. And people wouldn't dismiss simple explanations for being "too obvious".
Intellectuals would realize that even if a simple idea required lots of effort to discover, it need not require lots of effort to grasp. Verification is much quicker than search.
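To illustrate the search/verification asymmetry with a minimal sketch (integer factoring as a stand-in; the numbers are arbitrary primes I picked for illustration):

```python
import time

def find_factor(n):
    # Search: trial-divide until we hit a nontrivial factor of n.
    i = 2
    while i * i <= n:
        if n % i == 0:
            return i
        i += 1
    return None  # n is prime

def verify_factor(n, f):
    # Verification: a single divisibility check confirms the claim.
    return f is not None and 1 < f < n and n % f == 0

n = 1_000_003 * 999_983  # a semiprime with two largish prime factors

start = time.perf_counter()
f = find_factor(n)
print(f"search took {time.perf_counter() - start:.4f}s, found {f}")

start = time.perf_counter()
assert verify_factor(n, f)
print(f"verification took {time.perf_counter() - start:.7f}s")
```

Finding the factor takes roughly a million trial divisions; checking it takes one. The same asymmetry is why a simple explanation can be hard to produce but easy to evaluate.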
At the very least, I think, Glen wants our institutions to be like highly usable software: The internals require expertise to create and understand, but from a user's perspective, it "just works" and does what you expect.
Another point Glen makes well is that just because you are in the institution design business does not mean you're immune to incentives. The importance of self-skepticism regarding one's own incentives has been discussed before around here, but this recent post probably comes closest to Glen's position: that you really can't be trusted to monitor yourself.
Finally, Glen talks about the insularity of the rationalist community itself. This critique was certainly true in the past. I haven't been interacting with the community in person as much over the past few years, so I hesitate to speak about the present, but I think he's plausibly still right. I also think there may be an interesting counterargument: the rationalist community does a better job of integrating perspectives across multiple disciplines than your average academic department.
Possible Points of Disagreement
Although I think Glen would find some common ground with the recent post I linked, it's possible he would also find points of disagreement. In particular, habryka writes:
Common wisdom is that it's impossible to please everyone. And specialization of labor is a foundational principle of modern society. If I took my role as a member of "the public" seriously and tried to provide meaningful and fair accountability to everyone, I wouldn't have time to do anything else.
It's interesting that Glen talks up the value of "legibility", because from what I understand, Seeing Like a State emphasizes its disadvantages. The book discusses legibility in the eyes of state administrators, but Glen doesn't explain why we shouldn't expect similar failure modes when "the general public" is substituted for "the state administration".
(It's possible that Glen doesn't mean "legibility" in the same sense the book does, and a different term like "institutional legibility" would pinpoint what he's getting at. But there's still the question of whether we should expect optimizing for "institutional legibility" to be risk-free, after having observed that "societal legibility" has downsides. Glen seems to interpret recent political events as a result of excess technocracy, but they could also be seen as a result of excess populism--a leader's charisma could be more "legible" to the public than their competence.)
Anyway, I assume Glen is aware of these issues and working to solve them. I'm no expert, but from what I've heard of RadicalxChange, it seems like a really cool project. I'll offer my own uninformed outsider's perspective on institution design, in the hope that the conceptual raw material will prove useful to him or others.
My Take on Institution Design
I think there's another model which does a decent job of explaining the data Glen provides:
Human systems are complicated.
Greed finds & exploits flaws in institutions, causing them to decay over time.
There are no silver bullets.
From the perspective of this model, Glen's emphasis on legibility could be seen as yet another purported silver bullet. However, I don't see a compelling reason for it to succeed where previous bullets failed. How, concretely, are random folks like me supposed to help address the corruption Glen identifies in the wireless spectrum allocation process? There seems to be a bit of a disconnect between Glen's description of the problem and his description of the solution. (Later Glen mentions the value of "humanities, continental philosophy, or humanistic social sciences"--I'd be interested to hear specific ideas from these areas, which aren't commonly known, that he thinks are quite important & relevant for institution design purposes.)
As a recent & related example, a decade or two ago many people were talking about how the Internet would revitalize & strengthen democracy; nowadays I'd guess most would agree that the Internet has failed as a silver bullet in this regard. (In fact, sometimes I get the impression this is the only thing we can all agree on!)
Anyway... What do I think we should do?
All untested institution designs have flaws.
The challenge of institution design is to identify & fix flaws as cheaply as possible, ideally before the design goes into production.
Under this framework, it's not enough merely to have the approval of a large number of people. If these people have similar perspectives, their inability to identify flaws offers limited evidence about the overall robustness of the design.
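As a toy model of why this is (parameters invented for illustration, not drawn from Glen's essay): suppose each reviewer independently catches a flaw with some probability, but reviewers with similar perspectives share a blind spot that hides the flaw from all of them at once.

```python
import random

def flaw_survival_rate(n_reviewers, p_catch, p_shared_blind_spot, trials=100_000):
    """Estimate how often a flawed design survives n unanimous approvals.

    Every simulated design is flawed. With probability p_shared_blind_spot
    the flaw sits in a shared blind spot and no reviewer can see it;
    otherwise each reviewer independently catches it with probability p_catch.
    """
    survived = 0
    for _ in range(trials):
        if random.random() < p_shared_blind_spot:
            caught = False  # shared blind spot: nobody sees the flaw
        else:
            caught = any(random.random() < p_catch for _ in range(n_reviewers))
        if not caught:
            survived += 1
    return survived / trials

for n in (1, 5, 25):
    indep = flaw_survival_rate(n, p_catch=0.3, p_shared_blind_spot=0.0)
    similar = flaw_survival_rate(n, p_catch=0.3, p_shared_blind_spot=0.5)
    print(f"n={n:2d}  flaw survives: independent={indep:.3f}  similar={similar:.3f}")
```

With independent reviewers, twenty-five approvals make a surviving flaw vanishingly unlikely; with a shared blind spot, the survival probability floors at the blind-spot rate no matter how many reviewers sign off.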
Legibility is useful for flaw discovery in this framework, just as cleaner code could've been useful for surfacing flaws like Heartbleed. But there are other strategies available too, like offering bug bounties for the best available critiques.
Experiments and field trials are a bit more expensive, but it's critical to actually try things out and resolve disagreements among bug bounty participants. Then there's the "resume-building" stage of trialing one's institution on an increasingly large scale in the real world. I'd argue one should aim to have all the kinks worked out before "resume-building" starts, but of course, it's important to monitor the roll-out for problems which might emerge--and ideally, the institution should itself have means by which it can be patched "in production" (means which should get tested during experimentation & field trials).
The process I just described could itself be seen as an untested institution which is probably flawed and needs critiques, experiments, and field testing. (For example, bug bounties don't do anything on their own for legibility--how can we incentivize the production of clear explanations of the institution design in need of critiques?) Taking everything meta, and designing an institutional framework for introducing new institutions, is the real silver bullet if you ask me :-)
Probable Points of Disagreement
Given Glen's belief in the difficulty of knowledge creation, the importance of local knowledge, and the limitations of outside perspectives, I hope he won't be upset to learn that I think he got a few things wrong about the rationalist community. (I also think he got some things wrong about the EA community, but I believe he's working to fix those issues, so I won't address them.)
Glen writes:
This doesn't appear to be a difference of opinion with the rationalist community. In Eliezer's CEV paper, he writes about the "coherent extrapolated volition of humankind", not the "coherent extrapolated volition of the rationalist community".
However, now that MIRI's research is non-disclosed by default, I wonder if it would be wise for them to publicly state that their research is for the benefit of all, in a charter like OpenAI has, rather than in a paper published in 2004.
Glen writes:
An unaligned superintelligent AI which can build advanced nanotechnology has no need to follow human laws. On the flip side, an aligned superintelligent AI can design better institutions for aggregating our knowledge & preferences than any human could.
Glen writes:
This actually appears to me to be one of the primary goals of AI alignment research. See 2.3 in this paper or this parable. It's not alien to mainstream AI research either: see research on explainability and interpretability (pro tip: interpretability is better).
In any case, if the alignment problem is actually solved, legibility isn't needed, because we know exactly what the system's goals are: The goals we gave it.
Conclusion
As I said previously, I have not investigated RadicalxChange in very much depth, but my superficial impression is that it is really cool. I think it could be an extremely high-leverage project in a world where AGI doesn't come for a while, or gets invented slowly over time. My personal focus is on scenarios where AGI is invented relatively rapidly, relatively soon, but sometimes I wonder whether I should focus on the kind of work Glen does. In any case, I am rooting for him, and I hope his movement does an astonishing job of inventing and popularizing nearly flawless institution designs.