At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:
Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?
I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:
The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.
Confused, I asked:
Then what kind of democratic process did you have in mind?
The speaker replied:
Something like the Human Genome Project—that was an internationally sponsored research project.
I asked:
How would different interest groups resolve their conflicts in a structure like the Human Genome Project?
And the speaker said:
I don’t know.
This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:
We believe we are already within a democratic system. Some factors are still missing, like the expression of the people’s will.
The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or the voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?
I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.
This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:
We need to balance the risks and opportunities of AI.
If you reverse this statement, you get:
We shouldn’t balance the risks and opportunities of AI.
Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.
There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:
I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them . . .
By the way, if anyone wants to go to singinst.org and download the audio, you'll notice that the actual event did not occur exactly the way I remembered it - which should surprise no one here who knows anything about human memory. In particular, Cascio spontaneously provided the Genome Project example, rather than needing to be asked for it.
Generally, the reason I avoid identifying the characters in my examples is that it feels like I'm dumping all the sins of humankind upon their undeserving heads: presenting one error, out of context, as an exemplar of all the errors of this kind ever committed, while showing none of the speaker's good qualities. If I called them by name, it would be like caricaturing them.
That said, the reason why I picked this example is that, in fact, I was thinking of Orwell's "Politics and the English Language" while writing this post. And as Orwell said:
In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.
If you simply issue a call for "democracy", why, no one can disagree with that - it would be like disagreeing with a call for apple pie. As soon as you propose a specific mechanism of democracy, whether it is Congress passing a law, or an AI polling people by phone, or government funding of a large research project whose final authority belongs to an appointed committee of eminent scientists, et cetera, people can disagree with that, because they can actually visualize the probable consequences.
So there is a tremendous motive to avoid criticism, to keep to the safely vague areas where people will applaud you, and not to make the concrete proposals where people might - gasp! - disagree.
Now I do not accuse you too much of this, because you did say "Genome Project" when challenged instead of squirting out an immense cloud of ink. But it is why I challenged you to define "democracy". I think that the real value in these discussions comes from people willing to make concrete proposals and expose themselves to criticism.