American democracy currently operates far below its theoretical ideal. An ideal democracy precisely captures and represents the nuanced collective desires of its constituents, synthesizing diverse individual preferences into coherent, actionable policy.
Today's system offers no direct path for citizens to express individual priorities. Instead, voters select candidates whose platforms only approximately match their views, guess at which level of government (local, state, or federal) addresses their concerns, and ultimately rely on representatives who often reflect voter intentions imperfectly. As a result, issues affecting geographically dispersed groups, such as civil rights related to race, gender, or sexuality, are frequently overshadowed by localized interests. This distortion produces presidential candidates more closely aligned with each other's socioeconomic profiles than with the median voter.
Traditionally, aggregating individual preferences required simplifying complex desires into binary candidate selections, due to cognitive and communicative limitations. Large Language Models (LLMs), however, introduce a radical alternative by processing detailed, nuanced expressions of individual views at unprecedented scales.
Instead of forcing preferences into narrow candidate choices, citizens could freely articulate their concerns and solutions in natural language. An LLM can rapidly integrate these numerous, detailed responses into a clear and unified "Collective Views" document. Previously, synthesizing a hundred individual perspectives might have required five person-hours; specialized LLMs can now accomplish this task in minutes. Parallel implementations could aggregate millions of voices within an hour, transforming a previously unimaginable task into routine practice.
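The parallel aggregation described above can be pictured as a map-reduce pipeline: batches of responses are summarized concurrently, and the resulting summaries are merged recursively until one document remains. A minimal sketch follows; `summarize` is a hypothetical placeholder for an actual LLM call, and here it simply joins text so the pipeline is runnable.

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(texts):
    """Placeholder for an LLM summarization call (hypothetical);
    here it just joins the inputs so the pipeline runs end to end."""
    return " | ".join(texts)

def aggregate_views(responses, batch_size=100):
    """Map-reduce aggregation: summarize batches in parallel,
    then recursively merge the summaries into one document."""
    if len(responses) <= batch_size:
        return summarize(responses)
    batches = [responses[i:i + batch_size]
               for i in range(0, len(responses), batch_size)]
    with ThreadPoolExecutor() as pool:
        summaries = list(pool.map(summarize, batches))
    return aggregate_views(summaries, batch_size)
```

With batches of, say, 100 responses per call and a merge tree a few levels deep, millions of responses reduce to one "Collective Views" document in logarithmically many rounds, which is what makes the hour-scale estimate plausible.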
Such rapidly generated collective statements create a powerful mechanism for accountability, making government responsiveness directly measurable against clearly articulated public preferences. Transparency naturally constrains representatives' ability to diverge unnoticed from voter priorities.
Moreover, LLM-generated collective views could directly shape legislative drafting, significantly reducing lobbyist influence and governmental inefficiency. Continuous, dynamic engagement through AI enables real-time policy-making aligned closely with public sentiment, redefining democratic responsiveness.
This is the first in a possible series of posts exploring practical AI solutions to realize democratic ideals at scale. Subsequent posts could cover:
- Aggregation: Prototyping AI systems that clearly synthesize individual views into a cohesive Collective Will statement.
- Accountability: Holding up a Collective Will next to actual government activity (e.g., budget allocation) to highlight discrepancies.
- Action: Outlining concrete strategies to translate a Collective Will into effective legislative outcomes.
I suppose they could, and from now on I'll consider this to be one of the many significant dangers of governance by AI.
Part of the problem with direct democracy is that it provides no unambiguous mechanism for leaders to exercise judgment about the relative importance of different individuals' preferences and concerns, or the quality of their information and reasoning. A great many of America's, and the world's, most important advances and accomplishments in governance have happened in spite of, not because of, public sentiment. As I see it, many of the failings of American democracy in recent decades are rooted in the fact that we now demand a level of transparency that precludes the kind of quiet, private dealmaking and negotiation that used to enable elected officials to handle matters where their constituents' claimed preferences are inconsistent, incoherent, misguided, or otherwise not good for the country as a whole.
Is the less transparent human alternative often dangerous and misguided too? Absolutely. Could a truly virtuous (possibly sovereign) AI do better than any set of humans at setting up governance structures that facilitate human flourishing like never before? Also yes. But there are many ways to get the aggregation mechanism even just slightly wrong that could then turn into catastrophic or unrecoverable mistakes. Right now, if anyone proclaims any intention to try to implement such a thing, I'm quite confident they have no idea how to do so in a way that avoids such dangers.
"If the platform is created, how do you get people to use it the way you would like them to? People have views on far more than the things someone else thinks should concern them."
If people are weighted equally, i.e., if the influence of each person's written ballot is equal and capped, then each person is incentivized to emphasize the things which actually affect them.
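The capped-and-equal weighting can be made concrete with a toy sketch: if each ballot is a set of issue weights normalized to sum to one, every person has identical total influence, and spreading attention across irrelevant issues dilutes the weight on the issues that matter to them. The dictionary-of-weights representation here is an assumption for illustration, not part of the proposal.

```python
def normalized_ballot(weights):
    """Normalize a ballot's issue weights so they sum to 1,
    capping every person's total influence at the same level."""
    total = sum(weights.values())
    if total == 0:
        return {k: 0.0 for k in weights}
    return {k: v / total for k, v in weights.items()}

# A focused ballot concentrates influence on one issue;
# a scattered ballot dilutes it across many.
focused = normalized_ballot({"housing": 3})
scattered = normalized_ballot({"housing": 3, "tariffs": 3, "opera": 3})
```

Under this scheme, opining on matters that don't affect you isn't forbidden, merely costly: every unit of weight spent elsewhere is a unit subtracted from your own priorities.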
Anyone could express views on things which don't affect them, it'd just be unwise. When you're voting between candidates (as in status quo), those candidates attempt to educate and en...