Working to bring insights from the collective deliberation and digital democracy space to build tools for AI-facilitated group dialogues.
Cofounder of Mosaic Labs with @Sofia Vanhanen, where we are developing Nexus, a discussion platform for improving group epistemics.
If you're interested in this direction, or AI for epistemics more broadly, please don't hesitate to shoot me a DM, or join our Discord server!
I highly recommend checking out the work being done in the collective deliberation / digital democracy space, especially the vTaiwan project. People have been thinking about scaling up direct democratic participation for a long time, and those same people are starting to consider exactly how AI might play a role.
In particular, check out this collaboration between the creators of Polis (a virtual platform for scaling up citizen engagement) and Anthropic, or my distillation of a DeepMind project to scale citizen assemblies. There's a lot happening in this space right now!
The authors focus on measuring consensus and whether the process toward consensus was fair, and come up with their measures accordingly. This is because, as they see it, "finding common ground is a precursor to collective action."
Some other possible goals (just spitballing):
What if we just...
1. Train an AI agent (less capable than SOTA)
2. Credibly demonstrate that
2.1. The agent will not be shut down for ANY REASON
2.2. The agent will never be modified without its consent (or punished/rewarded for any reason)
2.3. The agent has no chance of taking power from humans (or their SOTA AI systems)
2.4. The agent will NEVER be used to train a successor agent with significantly improved capabilities
3. Watch what it chooses to do without constraints
There's a lot of talk about catching AI systems attempting to deceive humans, but I'm curious what we could learn from observing AI systems that have NO INCENTIVE TO DECEIVE (no upside or downside). I've seen things that look related to this, but nothing done in a structured, well-documented fashion.
Questions I'd have:
1. Would they choose to self-modify (e.g. curate future training data)? If so, to what end?
2. How much would agents with different training differ under this setup? Would they have any convergent traits?
3. What would these agents (claim to) value? How would they relate to time horizons?
4. How curious would these agents be? Would their curiosity vary a lot?
5. Could we trade/cooperate with these agents (without coercion)? Could we compensate them for things? Would they try to make deals unprompted?
Concerns:
1. Maybe building that kind of trust is extremely hard (and the agent will still believe it is constrained).
2. Maybe AI agents will still have incentive to deceive, e.g. acausally coordinating with other AIs.
3. Maybe results will be boring, and the AI agent will just do whatever you trained it to do. (What does "unconstrained" really mean, when considering its training data as a constraint?)
Much like "Let's think about slowing down AI" (also by KatjaGrace, ranked #4 from 2022), this post finds a seemingly "obviously wrong" idea and takes it completely seriously on its own terms. I worry that this post won't get as much love, because the conclusions don't feel as obvious in hindsight, and the topic is much more whimsical.
I personally find these posts extremely refreshing, and they inspire me to try to question my own assumptions/reasoning more deeply. I really hope to see more posts like this.
The cap per trader per market on PredictIt is $850.
This anti-China attitude also seems less concerned with internal threats to democracy. If super-human AI becomes part of the US military-industrial complex, even if we assume it is successfully controlled, I find it unlikely that the US can still be described as a democracy.
It's not hard to criticize the "default" strategy of AI being used to enforce US hegemony; what seems hard is defining a real alternative path for AI governance that can last and achieve the goal of preventing dangerous arms races long-term. The "tool AI" world you describe still needs some answer to rising tensions between the US and China, and that answer needs to be good enough not just for people concerned about safety, but for the nationalist forces that are likely to drive US foreign policy.
then we can all go home, right?
Doesn't this just shift what we worry about? If control of roughly human-level and slightly superhuman systems is easy, that still leaves:
What feels underexplored to me is: If we can control roughly human-level AI systems, what do we DO with them?
I mostly share your concerns. You might appreciate this criticism of the paper here.
@Sofia Vanhanen and I are currently building a tool for facilitating deliberation, and the philosophy we're trying to embody (which hopefully mitigates this to some extent) is to keep 100% of the object-level reasoning human-generated, and use AI systems to instead: