Hey Steven, I'll answer your question/suggestion below. One upfront request: please let us know if this helps. We'll write a follow-up post on LW explaining this.
As mentioned in the appendix, most of what we wrote up is generalized from concrete people (not made up; my IRL company, Digital Gaia) trying to build a specific, concrete AI thing: software to help farmers and leaders of regeneration projects maximize their positive environmental impact and generate more revenue by being able to transparently validate their impact to donors or carbon credit buyers. We talked extensively to people in the ag, climate, and nature industries, and came to the conclusion that the lack of transparent, unbiased impact measurement and validation -- i.e., exactly the transaction costs you mention -- is the reason why humanity is massively underinvested in conservation and regeneration.

There are gazillions of "climate AI" solutions that purport to measure and validate impact, but they are all fundamentally closed and centralized, and thus can't eliminate those transaction costs. In simple terms, none of the available systems, no matter how much money they spend on data or compute, can give a trustworthy, verifiable, privacy-preserving rationale for either scientific parameters ("why did you assume the soil carbon captured this year in this hectare was X tons?") or counterfactuals ("why did you recommend planting soybeans with an alfalfa rotation instead of a maize monoculture?"). We built the specific affordances that we did -- enabling local decision-support systems to connect to each other, forming a distributed hierarchical causal model that can perform federated partial pooling -- as a solution to exactly that problem:
We validated the first two steps of this theory in a pilot; it worked so well that our pilot users keep ringing us back saying they need us to turn it into production-ready software...
Disclaimer: We did not fully implement or validate two important pieces of the architecture that are alluded to in the post: free energy-based economics and trust models. These are not crucial for a small-scale, controlled pilot, but would be relevant for use at scale in the wild.
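To make "federated partial pooling" concrete, here is a minimal sketch under a simple normal-normal model: each local node shares only summary statistics (never raw farm data), a global prior pools them by precision, and each node shrinks its local estimate toward the pooled mean in proportion to how noisy its own data is. All names and the model choice are illustrative assumptions, not the Gaia Network's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class NodeSummary:
    mean: float      # local estimate, e.g. tons of soil carbon per hectare
    variance: float  # uncertainty of the local estimate
    n: int           # number of local observations

def pool(summaries):
    """Precision-weighted global mean (the 'complete pooling' component)."""
    weights = [1.0 / s.variance for s in summaries]
    total = sum(weights)
    mean = sum(w * s.mean for w, s in zip(weights, summaries)) / total
    return mean, 1.0 / total  # pooled mean and its variance

def shrink(node, pooled_mean, pooled_var):
    """Partial pooling: blend the local estimate with the global one."""
    w_local = 1.0 / node.variance
    w_global = 1.0 / pooled_var
    return (w_local * node.mean + w_global * pooled_mean) / (w_local + w_global)

nodes = [
    NodeSummary(mean=2.1, variance=0.4, n=12),  # data-rich farm
    NodeSummary(mean=5.0, variance=4.0, n=2),   # data-poor farm: noisy outlier
]
g_mean, g_var = pool(nodes)
estimates = [shrink(s, g_mean, g_var) for s in nodes]
# The data-poor farm's estimate is pulled strongly toward the pooled mean;
# the data-rich farm's moves much less.
```

The point of the federated version is that only `NodeSummary` objects cross node boundaries, which is what makes the scheme privacy-preserving while still letting data-poor projects borrow statistical strength from data-rich ones.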
A likely objection is that people won't want to prioritise the informationally best comments, and that their main motivation for reading comments is confirming their pre-existing worldviews. This is roughly what convention would lead us to expect, but leaning into my optimism bias, I should plan as if this is not the case. (Otherwise, aren't we all doomed anyway?)
There are countermoves to this. Preferences and behaviors are malleable. There can be incentives for adopting BetterDiscourse (potentially through public good funding), peer pressure, etc.
I think this should already be very valuable in the completely local regime. However, things may get even more interesting: to recapitulate the "collaborative filtration power" of Community Notes, Pol.is, and Viewpoints.xyz, (active) users' feedback is aggregated to bubble up the best comments for new users, or for users who don't vote actively enough to tune their predictive model well. Furthermore, when users with a similar state-space have voted positively for comments that their models didn't predict, such comments could be shown earlier to other users in the same state-space cluster, overriding the predictions of their models.
I used to think this wouldn't reach a critical mass of high-quality active users, but I've started warming up to the idea. Just yesterday I was talking to some friends who basically described how they pack-hunted to debunk right-wing political commentary on Instagram and news sites. And these are Brazilian diaspora normies in their 40s: highly educated, but not the highly motivated teenage-nerd persona I would normally envision as an active contributor to this kind of thing. So I think if we find a way to help people like this, who already see collaborative moderation as an important public duty, by increasing the material impact of their contributions and making it more visible, we can achieve critical mass and at least initially overcome the deluge of noise that characterizes online commentary.
Maybe just point to the relevant paper? https://arxiv.org/abs/2312.00752
Excellent post, a great starting point, but we must go deeper :) For instance:
@Épiphanie Gédéon this is great, very complementary/related to what we've been developing for the Gaia Network. I'm particularly thrilled to see the focus on simplicity and incrementalism, as well as the willingness to roll up one's sleeves and write code (often sorely lacking in LW). And I'm glad that you are taking the map/territory problem seriously; I wholeheartedly agree with the following: "Most safe-by-design approaches seem to rely heavily on formal proofs. While formal proofs offer hard guarantees, they are often unreliable because their model of reality needs to be extremely close to reality itself and very detailed to provide assurance."
A few additional thoughts:
I'd be keen to find ways to collaborate.
Also @Roman Leventov FYI