Storable Votes with a Pay as You Win mechanism: a contribution for institutional design
I joined the EA Forum in 2022 with a post describing my interests and agenda. I also declared in my first comment that, in my view, a critical existential risk bottleneck for this Dangerous Century is institutional stagnation. E. O. Wilson famously said: "The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology." Regarding the Paleolithic emotions and the god-like technology I have nothing to contribute, but regarding the medieval institutions I think I can make some modest contributions. Here are two of them, very likely my most important scientific contributions so far: the first is an already published journal article; the second is a new pre-print (please feel free to make suggestions for improvement).

Storable Votes with a Pay as You Win mechanism

This article ("Storable Votes with a Pay as You Win mechanism" [Journal of Economic Interaction and Coordination, pre-print here for access after the expiry of ShareLink]) presents a dynamic voting mechanism on multiple alternatives (Storable Votes-Pay as You Win [SV-PAYW]). At the beginning, all agents are given an equal number of (infinitely divisible) storable votes. The agents state how many votes they are willing "to pay" for each of the possible alternatives, and the alternative receiving the most votes wins the election. Then the votes that were committed to the winning alternative are deducted from each supporter's account and redistributed equally among all participants, and a new voting period begins. The system reduces the incentives for strategic voting: agents do not stop signaling their interest in alternatives with little probability of victory (if an alternative does not win, you do not pay votes for it). It also addresses the problem of minority disenfranchisement: the more elections a subject loses, the more electoral power she accumulates for future rounds. The article uses exact computational methods (GAMBIT is used for backward induction).
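To make the round dynamics concrete, here is a minimal sketch of a single SV-PAYW voting period in Python. This is an illustration of the mechanism as described above, not the paper's actual code; all function and variable names are my own, and ties are broken arbitrarily by `max`.

```python
def svpayw_round(accounts, bids):
    """One illustrative SV-PAYW round.

    accounts: dict agent -> current stock of storable votes
    bids: dict agent -> dict alternative -> votes offered
          (paid only if that alternative wins)
    Returns (winning alternative, updated accounts).
    """
    # Tally the votes offered to each alternative.
    totals = {}
    for agent_bids in bids.values():
        for alt, v in agent_bids.items():
            totals[alt] = totals.get(alt, 0.0) + v

    # The alternative with the most votes wins (arbitrary tie-break).
    winner = max(totals, key=totals.get)

    # "Pay as you win": only votes committed to the winner are deducted.
    paid = 0.0
    new_accounts = dict(accounts)
    for agent, agent_bids in bids.items():
        cost = agent_bids.get(winner, 0.0)
        new_accounts[agent] -= cost
        paid += cost

    # The paid votes are redistributed equally among all participants,
    # so losers accumulate relative voting power for future rounds.
    refund = paid / len(accounts)
    for agent in new_accounts:
        new_accounts[agent] += refund

    return winner, new_accounts
```

For example, with three agents each holding one vote, if agent `a` bids 0.8 and agent `c` bids 0.2 on alternative `X` while agent `b` bids 0.5 on `Y`, then `X` wins; only `a` and `c` pay, the 1.0 paid votes are split three ways, and `b` ends the round with more than her initial endowment. The total stock of votes is conserved across rounds.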
I am very happy this is now mainstream:
https://forum.effectivealtruism.org/posts/6j6qgNa3uGmzJEMoN/artificial-intelligence-as-exit-strategy-from-the-age-of
The risks that AGI implies for Humanity are serious, but they should not be assessed without considering that it is the most promising path out of the age of acute existential risk. Those who support a ban on this technology should at least propose their own alternative exit strategy.
https://forum.effectivealtruism.org/posts/uHeeE5d96TKowTzjA/world-and-mind-in-artificial-intelligence-arguments-against
Bostrom is the absolute best.