Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Is the catastrophe we need the AI to avert really worse than the potential dangers of a poorly built AI?
It must still be considered. You may not have time to consider it thoroughly (time itself is now a factor), and that constraint must be part of your assessment, but you still have to weigh the new risks against the potential reward.
Same with the abomination. On what basis is it an abomination? What are the consequences if we create it? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?
It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may well be a price worth paying.
Consider the atomic bomb before the first live tests. There were real concerns that splitting the atom could trigger an unstoppable chain reaction that would set the very air on fire, destroying the whole world in a single moment. I can't imagine a scenario more dire, or one that argues more strongly for ceasing all argument.
Yet they did the math anyway, weighed the risk (a tiny chance of blowing up the world) against the reward (ending a war otherwise guaranteed to kill millions more people), and decided it was worth it to continue.
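To make the shape of that calculation explicit, here is a minimal expected-value sketch; the symbols are my own, and any numbers plugged in would be illustrative assumptions, not the Manhattan Project's actual estimates. Let \(p\) be the probability of igniting the atmosphere, \(L\) the value of everything lost if that happens, and \(G\) the value of ending the war early:

\[
\mathbb{E}[\text{continue}] = (1 - p)\,G - p\,L, \qquad \mathbb{E}[\text{halt}] = 0,
\]

so continuing is rational precisely when \((1 - p)\,G > p\,L\). A "tiny chance" is not automatically negligible: the comparison turns on how large \(L\) is relative to \(G\), which is exactly why the math had to be done rather than the argument simply halted.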
I still see no rational case for ever halting argument, except when the time for assessment simply runs out (if you don't act before X, the world blows up; obviously you must finish your assessment before X, or it was all pointless). You may weigh the risks against the opportunities, decide the risks are too great, and choose not to continue. But you cannot rationally cease all argument, without consideration, merely because a particular argument is strong or dire. To do so is irrational.
Of course you can cease argument without consideration: when you deem the risks of continuing to deliberate to outweigh the benefits of deliberating. For instance, if you have one minute to try something that would save your life, and you need at least five minutes to properly assess anything further, you generally can't afford to weigh whether the idea might somehow make things worse, beyond whatever assessment you have already made. At that point, the time for assessment is over.
For the most part, however, I agree with your point. I did not argu...
At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of artificial intelligence. So I stepped up to the microphone and asked:

"Suppose that a group of democratic republics form a consortium to develop AI, and there's a lot of politicking during the process: some interest groups have unusually large influence, others get shafted; in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world, dropping cellphones to anyone who doesn't have them, and do whatever the majority says. Which of these do you think is more 'democratic,' and would you feel safe with either?"
I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting. But the speaker replied:

"The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot."
Confused, I asked:

"Then what kind of democratic process did you have in mind?"
The speaker replied:

"Something like the Human Genome Project; that was an internationally sponsored research project."
I asked:

"How would different interest groups resolve their conflicts in a structure like the Human Genome Project?"
And the speaker said:

"I don't know."
This exchange puts me in mind of a quote from some dictator or other, who was asked if he had any intentions to move his pet state toward democracy:

"We believe we are already within a democratic system. Some factors are still missing, like the expression of the people's will."
The substance of a democracy is the specific mechanism that resolves policy conflicts. If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate. The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an artificial intelligence, but it has to be something. What does it mean to call for a “democratic” solution if you don’t have a conflict-resolution mechanism in mind?
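To make "a specific mechanism" concrete, here is a minimal sketch of the simplest one, a direct majority vote; the function name and example ballots are invented for illustration:

```python
from collections import Counter

def resolve_by_majority(ballots: list[str]) -> str:
    """Resolve a policy conflict by direct majority vote.

    Each ballot names the voter's preferred policy; the policy with
    the most ballots wins. Even this toy forces concrete choices:
    ties are broken arbitrarily here, and a real mechanism would
    also have to specify tie-breaking, eligibility, and so on.
    """
    tally = Counter(ballots)
    winner, _count = tally.most_common(1)[0]
    return winner

# Three groups with conflicting preferred policies:
print(resolve_by_majority(["fund", "fund", "halt"]))  # -> fund
```

Whatever the mechanism (a vote, a legislature, a voter-sensitive AI), the substance lives in that function body; a call for "democracy" that leaves the body unwritten has not actually proposed anything.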
I think it means that you have said the word “democracy,” so the audience is supposed to cheer. It’s not so much a propositional statement or belief, as the equivalent of the “Applause” light that tells a studio audience when to clap.
This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all. Most applause lights are much more blatant, and can be detected by a simple reversal test. For example, suppose someone says:

"We need to balance the risks and opportunities of AI."
If you reverse this statement, you get:

"We shouldn't balance the risks and opportunities of AI."
Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.
There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. “We need to balance the risks and opportunities of AI” can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal. Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.
I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

"I am here to propose to you today that we need to balance the risks and opportunities of advanced artificial intelligence. We should avoid the risks and, insofar as it is possible, realize the opportunities. We should not needlessly confront entirely unnecessary dangers. To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm. We should respect the interests of all parties with a stake in the Singularity. We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few. We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals. We should think through these issues before, not after, it is too late to do anything about them..."