
Applause Lights

Post author: Eliezer_Yudkowsky, 11 September 2007 06:31PM

Followup to: Semantic Stopsigns, We Don't Really Want Your Participation

At the Singularity Summit 2007, one of the speakers called for democratic, multinational development of AI.  So I stepped up to the microphone and asked:

Suppose that a group of democratic republics form a consortium to develop AI, and there's a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies.  Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn't have them—and do whatever the majority says.  Which of these do you think is more "democratic", and would you feel safe with either?

I wanted to find out whether he believed in the pragmatic adequacy of the democratic political process, or if he believed in the moral rightness of voting.  But the speaker replied:

The first scenario sounds like an editorial in Reason magazine, and the second sounds like a Hollywood movie plot.

Confused, I asked:

Then what kind of democratic process did you have in mind?

The speaker replied:

Something like the Human Genome Project—that was an internationally sponsored research project.

I asked:

How would different interest groups resolve their conflicts in a structure like the Human Genome Project?

And the speaker said:

I don't know.

This exchange puts me in mind of a quote (which I failed to Google, but which Jeff Gray and Miguel tracked down) from some dictator or other, who was asked if he had any intention of moving his pet state toward democracy:

We believe we are already within a democratic system.  Some factors are still missing, like the expression of the people's will.

The substance of a democracy is the specific mechanism that resolves policy conflicts.  If all groups had the same preferred policies, there would be no need for democracy—we would automatically cooperate.  The resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an AI, but it has to be something.  What does it mean to call for a "democratic" solution if you don't have a conflict-resolution mechanism in mind?

I think it means that you have said the word "democracy", so the audience is supposed to cheer.  It's not so much a propositional statement, as the equivalent of the "Applause" light that tells a studio audience when to clap.

This case is remarkable only in that I mistook the applause light for a policy suggestion, with subsequent embarrassment for all.  Most applause lights are much more blatant, and can be detected by a simple reversal test.  For example, suppose someone says:

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn't balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.  There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation.  "We need to balance the risks and opportunities of AI" can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal.  Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious.  But if no specifics follow, the sentence is probably an applause light.

I am tempted to give a talk sometime that consists of nothing but applause lights, and see how long it takes for the audience to start laughing:

I am here to propose to you today that we need to balance the risks and opportunities of advanced Artificial Intelligence.  We should avoid the risks and, insofar as it is possible, realize the opportunities.  We should not needlessly confront entirely unnecessary dangers.  To achieve these goals, we must plan wisely and rationally.  We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm.  We should respect the interests of all parties with a stake in the Singularity.  We must try to ensure that the benefits of advanced technologies accrue to as many individuals as possible, rather than being restricted to a few.  We must try to avoid, as much as possible, violent conflicts using these technologies; and we must prevent massive destructive capability from falling into the hands of individuals.  We should think through these issues before, not after, it is too late to do anything about them...

 

Part of the sequence Mysterious Answers to Mysterious Questions

Next post: "Truly Part Of You"

Previous post: "'Science' as Curiosity-Stopper"

Comments (75)

Comment author: Ray 11 September 2007 06:45:08PM 103 points [-]

You have, I think, come upon the essence of modern political speeches.

Comment author: David_J._Balan 11 September 2007 06:53:06PM 3 points [-]

The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time.

Comment author: Gray 15 March 2011 07:54:44PM 6 points [-]

Actually, that seems to be an extremely empty statement. "Having little political power" seems to imply, and be implied by, "being ignored". I wouldn't doubt that the two predicates are coextensive. Since people with little political power are, by definition, ignored, saying that people with little political power should not be ignored makes as much sense as saying that squares should be circular.

But maybe I'm not being very charitable here. You can make the shape that was once a square more circular, as long as you note that the shape isn't a square anymore. Similarly, people with little political power can, over time, gain more political power, which is a positive thing. But even if everyone had an equal amount of political power, the proposition that "people with little political power are ignored" would still be true, even if the predicates pick out the empty set.

Comment author: CuSithBell 15 March 2011 08:16:13PM *  4 points [-]

I disagree.

Even if your interpretation of these terms were accurate, "the elements of this set should (in the future) not be elements of this set" isn't an empty statement.

Second, a benevolent dictator (or, say, an FAI) could certainly advance the interests of a group with absolutely no say in what said dictator does.

Comment author: Gray 18 March 2011 03:13:26AM *  7 points [-]

Eeek, I think the differences in interpretations are due to the de re / de dicto distinction.

Compare the following translations of the statement "people without political power should not be ignored."

De dicto: "It should not be the case that any person without political power is also a person who is ignored."

De re: "If there is a person without political power, then that person should not be ignored."
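The scope difference in the two translations can be made explicit in deontic-logic notation (a sketch; O reads "it ought to be that", and P(x), I(x) are shorthand for "x lacks political power" and "x is ignored"):

```latex
% De dicto: the obligation operator governs the whole proposition
O\,\neg\exists x\,\bigl(P(x)\wedge I(x)\bigr)

% De re: the obligation attaches to each individual separately
\forall x\,\bigl(P(x)\rightarrow O\,\neg I(x)\bigr)
```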

If the two predicates in the de re interpretation ("person without political power" and "person who is ignored") are coextensive, and thus equivalent, we should be able to substitute like terms and derive "If there is a person without political power, then that person should not be without political power." Given that I wanted to use the more charitable interpretation, this is the interpretation I should use, and so you're correct :)

But look what happens to the de dicto interpretation when you substitute like terms. It turns into "It should not be the case that a person without political power is a person without political power." This is the sort of thing I was objecting to, to begin with. But it was the wrong interpretation, and thus my error.

(Yeah, I decided to go into an extensive analysis here mainly to refine my logic skills, and in case anyone else is interested. Mathematicians, I suppose, would probably not have studied the de re / de dicto distinction, mainly because I don't see much relevance to mathematics.)

Comment author: CuSithBell 18 March 2011 04:03:33AM 0 points [-]

Huh! Thanks for the thorough analysis :) I'd say the most likely intent behind the statement is that people with direct political power should use it for the benefit of those without direct political power - i.e. elected officials and so forth should provide support for minority groups without much voting power. In which case your initial thought that they intended a "de dicto" reading could be right!

Did I tip my hand about being a mathematician by mentioning set theory? ;)

Comment author: Robin_Hanson2 11 September 2007 07:35:22PM 36 points [-]

Alas, for most audiences I think you would find no one laughing even after an entire applause light speech.

Comment author: patrissimo 07 December 2010 04:55:53AM 33 points [-]

Yeah, but you'd get lots of applause!

Comment author: Matt_Simpson 28 January 2013 09:51:17PM 3 points [-]

Evidence: any graduation speech I've ever been subjected to.

Comment author: Chrysophylax 31 January 2013 05:30:14PM 1 point [-]

I tried this for my valedictory speech and gave up after about 15 seconds due to the laughter.

My preferred method is to use long sentences, to speak slowly and seriously, with great emphasis, and to wave my hands in small circles as I speak. If you don't speak to this audience regularly, it is also a good idea to emphasise how grateful you are to be asked to speak on such an important occasion (and it is a very important occasion...). You get bonus points for using the phrase "just so chuffed", especially if you use it repeatedly (a technique I learned from my old headmaster, who never expressed satisfaction in any other way while giving speeches).

I also recommend this technique, this way of speaking, to anyone who wishes to wind up, by which I mean annoy or irritate, a family member. It's quite effective when used consistently, even if you only do it for a minute or two. Don't you agree?

Comment author: Peter_de_Blanc 11 September 2007 09:12:26PM 9 points [-]

I remember at the AGIRI workshop in DC last year, Alexei Samsonovich talked about sorting a list of English words along two dimensions - "valence" and "arousal," indicating some component of the emotional response which words evoke.

Maybe audiences respond to speeches by summing the emotion vectors of each word in the speech, rather than parsing sentences.
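The word-summing model above can be sketched in a few lines (a toy, with invented valence/arousal scores; `LEXICON` and `emotion_sum` are hypothetical names, not Samsonovich's actual data):

```python
# Toy "bag of emotions" model: score a speech by summing per-word
# (valence, arousal) vectors, ignoring sentence structure entirely.
# The lexicon values below are invented for illustration.
LEXICON = {
    "balance": (0.6, 0.2),
    "risks": (-0.4, 0.7),
    "opportunities": (0.7, 0.5),
    "wisely": (0.8, 0.1),
    "panic": (-0.8, 0.9),
}

def emotion_sum(speech):
    """Sum the emotion vectors of recognized words; unknown words count as (0, 0)."""
    valence = arousal = 0.0
    for word in speech.lower().split():
        v, a = LEXICON.get(word.strip(".,;!?"), (0.0, 0.0))
        valence += v
        arousal += a
    return valence, arousal
```

On this model, "We must balance the risks and opportunities wisely" scores positive no matter how the sentence is parsed, which is the point of the hypothesis above.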

Quick test: who here is excited by the prospects of anthropic quantum computing?

Comment author: Arthur1981 21 March 2010 08:28:18AM 15 points [-]

What I find interesting is that there are some obvious parallels between applause lights and Barnum statements - so named after P.T. Barnum.

Barnum statements are essentially statements which anyone can apply to themselves as true, which essentially say nothing, and which feel unique to each individual hearing themselves described that way.

Barnum statements are a stock-in-trade of cold readers such as mentalists and psychics. It seems to me that applause lights are nothing more than the abstract, impersonal version of the same phenomenon; or perhaps the same phenomenon put to a rhetorical and ideological application.

Comment author: JohnWittle 05 September 2012 04:27:14PM 1 point [-]

anthropic quantum computing? If I were flipping through the channels and heard that phrase uttered by someone who looked like he was giving a speech, I would be immediately interested in learning more and would definitely stay on the channel. I have no idea what the phrase means, but my immediate guesses are indeed exciting.

Comment author: shminux 05 September 2012 04:55:11PM *  3 points [-]

anthropic quantum computing

I'd think that it came out of a random abstract generator like snarxiv.

Comment author: Vladimir_Nesov2 11 September 2007 09:28:10PM 2 points [-]

Such a speech could in theory perform a "bringing to attention" function. Chunks of "bringing to attention" are equivalent to any other kind of knowledge; it's just an inefficient form, and the abnormality of that speech lies in its utter inefficiency, not in a lack of content. People can bear such talk because similar inefficiency can be present in other talks in different forms. Inefficiency also makes it much simpler to disguise the evasion of certain topics.

Comment author: michael_vassar3 11 September 2007 10:20:43PM 16 points [-]

I'm pretty sure that many people and organizations routinely DO argue that "we shouldn't balance the risks and opportunities of X". In ethics, deontological systems claim this. In policy, environmentalists are the first example that springs to mind, though they have been getting substantially better in the last few years. Radical pacifists like Gandhi have often been praised for asserting that people should not balance the risks and opportunities of war. More broadly, displaying this attitude seems to be necessary for anyone attempting to portray themselves as extraordinarily "virtuous", at least as virtue is normally understood in our broadly Christian-derived civilization.

I actually think that it would be a good idea to try presenting a speech of nothing but applause lights, but I suspect it has been done. "The Gentle Art of Verbal Self Defense" claims in the appendix that such a speech has been written and presented, to applause, on a variety of topics. It seems to me, though, that the speech you proposed above was actually an endorsement of a reasonable set of meta-policies which are in fact generally not engaged in, and was thus substantive, not empty, so I'm not sure it counts.

Comment author: Anders_Sandberg 11 September 2007 10:44:50PM 2 points [-]

David's comment that we shouldn't ignore people with little political power is a bit problematic. People who are not ignored in a political process have by definition some political power; whoever is ignored lacks power. So the meaning becomes "people who are ignored are ignored all the time". The only way to handle it is to never ignore anybody on anything. So please tell me your views on whether Solna municipality in Sweden should spend more money on the stairs above the station, or on a traffic light - otherwise the decision will not be fully democratic.

I wonder if the sensitivity to applause lights is different in different cultures. When I lectured in Madrid I found my own and several friends' speeches fell relatively flat, despite being our normally successful "standard speeches". But a few others got roaring responses at the applause lights - we were simply not turning them on brightly enough. The reward of a roaring applause is of course enough to bias a speaker to start pouring on more applause lights.

Hmm, was my use of "bias" above just an applause light for Overcoming Bias?

Comment author: bigjeff5 31 January 2011 04:20:35PM 0 points [-]

The reward of a roaring applause is of course enough to bias a speaker to start pouring on more applause lights.

Hmm, was my use of "bias" above just an applause light for Overcoming Bias?

Perhaps a better word would be "train".

Comment author: Luke_G. 11 September 2007 11:01:51PM 17 points [-]

Eliezer's nothing-but-applause-lights speech sounds strangely like every State of the Union address I've ever heard...

Comment author: Arnold_Kling 11 September 2007 11:18:37PM 1 point [-]

See also Trust Cues.

Comment author: NickRetallack 27 June 2013 04:46:17AM 1 point [-]

When I click that link, my browser downloads a file called redirect.php.

Comment author: Michael_Rooney 12 September 2007 12:04:13AM 2 points [-]

Rather than just "applause lights", sloganeering often is a cue to group-identification. Cf. postmodern text generators.

Comment author: Cihan_Baran 12 September 2007 01:13:32AM 1 point [-]
Comment author: J_Thomas 12 September 2007 01:15:13AM 0 points [-]

"The democracy booster probably meant that people with little political power should not be ignored. And that's not an empty statement; people with little political power are ignored all the time."

But isn't it precisely the people with little political power who can most safely be ignored?

Comment author: bigjeff5 31 January 2011 04:26:12PM *  1 point [-]

In standard democracy, yes, that is the case.

Perfect democracy is pure majority rule. Through history we have learned that this is probably the worst possible idea for a form of government. The mob has no concern for those who are not in the mob, and the apathy of the crowd can lead to some horrific consequences for those in the minority.

This is why most democracies are not really democracies, but have strong constraints that boost the power of the weakest members to prevent them from being overruled on every decision, while still giving the majority the larger share of the power.

For example, in the US the democratic process is split between two houses, The House of Representatives, which is population based and represents majority rule, and The Senate, for which each state gets only two representatives regardless of population. That balances the power while still giving the majority the majority of the power.

It's constraints similar to this (everyone does it differently, the point is that you always need to do it) that allow democratically based systems to work. In the US we also put in a president to make sure things get done, and then went as far outside the democratic system as the founders were comfortable with to install the third constraint on the system - the courts.

It could work just fine if there were plenty of well thought out constraints on it, but "democracy" by itself probably would not work at all; it rarely ever does. Therefore, saying "democracy" without any intention of discussing it is clearly just an applause word. Either that, or the man was totally ignorant. Leave it to someone like that to require the absolute destruction of a major effort like AGI just to learn the pitfalls of democracy that have been learned over and over and over again.

Comment author: omeganaut 11 May 2011 08:08:18PM 0 points [-]

But that in now way implies that they should be ignored.

Comment author: thomblake 11 May 2011 10:05:32PM 0 points [-]

But that in now way implies that they should be ignored.

It at least to some extent implies that they should be ignored. To illustrate:

Someone who has great political power should not be ignored. This statement is not vacuous; it is instead making a worthwhile statement of fact. Given that, we know that people who do not have great political power should be ignored to a greater extent than people who do have great political power. Thus, that one does not have great political power (at least weakly) implies that one should be ignored (ceteris paribus). This contradicts the claim "That in no way implies that they should be ignored" (emphasis added).

As a side note, the comment you're responding to was left in 2007, and even on a different website. As a general rule, unless you're making a significant contribution, it's not worth responding to comments that were left before 2009.

If you do believe the parent comment is a worthwhile contribution, I'd suggest correcting "now" to "no" (assuming that's what you meant).

Comment author: James_Bach 12 September 2007 03:50:24AM 1 point [-]

Curiously Eliezer, I feel like applauding. Good post.

Comment author: Shakespeare's_Fool 12 September 2007 04:12:13AM 0 points [-]

Eliezer,

Thank you for the quotation:

"We believe that we are already living in a democracy, although some factors are still missing, such as the expression of the people's will"

I hope someone can tell us who said it.

John

Comment author: Patri_Friedman 12 September 2007 06:12:21AM 1 point [-]

It might not convey information, but I bet you could get thunderous applause. Often, the latter outweighs the former when it comes to the goals of a speech.

Comment author: jeff_gray2 12 September 2007 07:29:11AM 1 point [-]

link to 1981 Time magazine interview with the president of Argentina - source of Eliezer's quote about democracy absent the people's will.

http://www.time.com/time/magazine/article/0,9171,954853,00.html?promoid=googlep

Comment author: Leonard 12 September 2007 07:14:42PM 3 points [-]

The substance of a democracy is the specific mechanism that resolves policy conflicts. ... What does it mean to call for a "democratic" solution if you don't have a conflict-resolution mechanism in mind?

I think that for many people the "substance" of democracy is not the specific mechanism, but rather the general mechanism, and the nature of the output. The mechanism must include at least some formal representation of every member. The details of this don't matter so much: it might be direct voting (strictly equal power), or it might be a representative system (so long as the reps for each voter are more or less equal in power). And the general nature of the output is that it should be fair. Exactly what counts as fair is a good question, and probably varies a lot. But at least this: conflicts should not always be resolved in favor of the same person or group or class.

This is not a particularly well-defined notion; clearly it does not resonate with you, who want a stricter definition. But it is hardly a meaningless notion, either. It is not an applause sign.

It is also, I think, a much more useful concept than you seem to have in mind. You are hung up on specifics: "the resolution process can be a direct majority vote, or an elected legislature, or even a voter-sensitive behavior of an AI, but it has to be something." Yes, in any actual project for developing AI, it would have to be something, and something specific. But specifically which of these methods (or an infinity of other specific implementations of "democracy") did not matter to the speaker you refer to.

Comment author: pnrjulius 19 May 2012 04:32:50AM 0 points [-]

But was it really that it didn't MATTER, or simply that he didn't KNOW?

I think it was the latter; what's more, it didn't even occur to him to ask the question. He seemed to think that saying "democratic" was enough.

Comment author: Miguel 13 September 2007 12:47:21AM 5 points [-]

I know where your quote came from: http://www.time.com/time/magazine/article/0,9171,954853,00.html?promoid

It's from "President Roberto Eduardo Viola, formerly Argentina's army commander in chief".

It's an answer to the first question in the interview:

"Q. How soon do you expect Argentina to be returned to democratic government?

A. We believe we are already within a democratic system. Some factors are still missing, like the expression of the people's will, but nevertheless we still think we are within a democracy. We say so because we believe these two fundamental values of democracy, freedom and justice, are in force in our country. There are, it is true, several conditioning aspects as regards political or union activity, but individual freedom is nowhere infringed in an outstanding manner."

BTW, I googled it. Apparently my Google-fu is better than yours ;) (But I do *applaud* your excellent memory, or else I wouldn't have been able to find it.)

And keep up the great posts. I'm a daily reader of this blog.

- Miguel

Comment author: Alan_Crowe 15 September 2007 07:35:47PM 0 points [-]

[rhetorical pose] We shouldn't balance the risks and opportunities of AI. Enthusiasts for AI are biased. They underestimate the difficulties. They would not be so enthusiastic if they grasped how disappointing progress is likely to be. Detractors of AI are also biased. They underestimate the difficulties too. You will have a hard time convincing them of the difficulties, because you would be trying to persuade them that they had been frightened of shadows.

So there are few opportunities which are likely to be altogether lost if we hang back through unnecessary fear. [/rhetorical]

Well, I happen to believe the two paragraphs above, but distinct from the question of whether I am right or not is the question of whether the phrase "We need to balance the risks and opportunities of AI." means something or whether it is merely an applause light.

I think it is trivially true that we need to balance the actual risks and actual opportunities of AI. There is room for disagreement about whether we need to balance the perceived risks and perceived opportunities. If perceptions are accurate we should, but there is scope to say, for example, that the common perception is wrong and a rogue AI will in fact be quite stupid and easily unplugged. This opens the way to a decoding of language in which

o We need to balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities correctly and

o We shouldn't balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.

One question that I dwell on is "how do intelligent and well-intentioned persons fall to quarrelling?". The idea of an Applause Light is illuminating, but I think it is also quite tangled. There is the ambiguity between whether a phrase is an Applause Light or a Policy Proposal. I suspect that the core problem is that it is awfully tempting to exploit this ambiguity rhetorically, deliberately coding one's policy proposals in language that also functions as an Applause Light so that they come across as obviously correct.

The fun starts when one does this subconsciously and someone else thinks it is deliberate and takes offence. Once this happens there is little chance of discovering the actual disagreement (which might be about the accuracy of risk assessments), for the conversation will be derailed into meta-conversations about empty phrases and rhetoric.

Comment author: bigjeff5 31 January 2011 04:45:36PM 2 points [-]

o We need to balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities correctly and

o We shouldn't balance the risks and opportunities of AI.

is the position that we are assessing the risks and opportunities incorrectly and should follow a different path from that indicated by our inaccurate assessments. Such a position needs fleshing out with a rival account of the risks and opportunities.

I don't get that at all. If "We shouldn't balance the risks and opportunities of AI" means they are being assessed incorrectly, isn't that a part of balancing the risks and opportunities of AI? I don't see how you can get that out of the statement. If they are being done incorrectly, then in the discussion of the risks and opportunities you say "No, you're doing it wrong, you need to look at it like this blah blah blah".

When you say "We shouldn't balance the risks and opportunities of AI" it means to stop making an assessment altogether. It says nothing about continuing to go forward with the project or not. It doesn't say "Stop the project! This is all wrong!" That would fall under balancing the risks and opportunities - an assessment that came against AI.

That's foolishness, which is why no one would ever utter the phrase in the first place. That makes the prior phrase an applause phrase, because it is obvious to anyone involved that such an assessment is necessary. You're only saying it because you know people will nod their head in agreement and possibly clap.

Comment author: Polymeron 23 February 2011 03:38:48PM 1 point [-]

It would make sense in the context of a strong bias toward a specific outcome, e.g. religious indignation toward an idea.

A person believing that thinking machines are an abomination would tell you to stop assessing and forget the whole idea. A person believing that AI is the only thing that could possibly rescue us from imminent catastrophe might well tell you to stop analyzing the risks and get on with building the AI before it's too late.

Either position would have a substantive position that you don't need to balance the risks and opportunities any further, without claiming that you have some error in your assessment.

Comment author: bigjeff5 23 February 2011 06:06:35PM 0 points [-]

Yet building an AI that eventually destroys all mankind, even after it averts this particular looming catastrophe, could easily be the worse choice. Does the catastrophe we need AI for outweigh the potential dangers of a poorly built AI?

It must still be considered. You may not have time to consider it thoroughly (as time is now a factor to consider), and that must be part of your assessment, but you still have to weigh the new risks against the potential reward.

Same with the abomination. Upon what basis is it an abomination? What are the consequences if we create the abomination? Do we spend a few extra years in purgatory, or do we burn in hell for all eternity?

It still must be considered. A few years in purgatory for a creation that saves mankind from the invading squid monsters may very much be worth doing.

Consider the atomic bomb before the first live tests. There were real concerns that splitting the atom could create an unstoppable chain of events which would set the very air on fire, destroying the whole world in that single moment. I can't really imagine a scenario that is more dire, and more strongly argues for the ceasing of all argument.

Yet they did the math anyway, considered the risks (tiny chance of blowing up the world) vs the reward (ending the war that is guaranteed to kill millions more people), and decided it was worth it to continue.

I still see no rational case for ever halting argument, except in the case of time for assessment simply running out (if you don't act before X, the world blows up - obviously you must finish your assessment before X or it was all pointless). You may weigh the risks vs the opportunities and decide the risks are too great, and decide not to continue. However, you can not rationally cease all argument without consideration because of a particularly strong or dire argument. To do so is irrational.

Comment author: Polymeron 23 February 2011 06:39:38PM 0 points [-]

Of course you can cease argument without consideration - if you deem the risks of continuing consideration to outweigh the benefits of weighing them. For instance, if you have 1 minute to try something that would save your life, and you require at least 5 minutes to properly assess anything further, you generally can't afford to weigh whether the idea would result in a worse situation somehow - beyond whatever assessment you have already made. At that point, the time for assessment is over.

For the most part, however, I agree with your point. I did not argue that one can rationally disagree with the statement "We need to balance the risks and opportunities of AI"; just that one can sincerely say it, and even argue for it. This was a response to your saying that "no one would ever utter the phrase in the first place". This just strikes me as false.

Never underestimate the power of human stupidity ;)

Comment author: bigjeff5 23 February 2011 10:25:25PM 0 points [-]

You're right, in that regard I was certainly mistaken.

Comment author: Venkat 24 September 2007 11:59:40PM 1 point [-]

That was kinda hilarious. I like your reversal test to detect content-free tautologies. Since I am working right now on a piece of AI-political-fiction (involving voting rights for artificial agents and questions that raises), I was thrown for a moment, but then tuned in to what YOU were talking about.

The 'Yes, Minister' and 'Yes, Prime Minister' series is full of extended pieces of such content-free dialog.

More seriously though, this is a bit of a strawman attack on the word 'democracy' being used as decoration/group dynamics cueing. You kinda blind-sided this guy, and I suspect he'd have a better answer if he had time to think. There is SOME content even to such a woolly-headed sentiment. Any large group (including large research teams) has conflict, and there is a spectrum of conflict resolution ranging from dictatorial imposition to democracy through to consensus.

Whether or not the formal scaffolding is present, an activity as complex as research CANNOT work unless the conflict resolution mechanisms are closer to the democracy/consensus end of the spectrum. Dictators can whip people's muscles into obedience, maybe even their lower-end skills ("do this arithmetic or DIE!"), but when you want to engage the creativity of a gang of PhDs, it is not going to work until there is a mechanism for their dissent to be heard and addressed. This means making the group itself representative (the 'multinational' part) automatically brings in the spirit if not the form of democratic discourses. So yes, if there are autocentric cultural biases today's AI researchers bring to the game, making the funding and execution multinational would help. Having worked on AI research as an intern in India 12 years ago, and working today in related fields here in the US, I can't say I see any such biases in this particular field, but perhaps in other fields, making up multinational, internationally-funded research teams would actually help.

On the flip side, you can have all the mechanisms and still allow dictatorial intent to prevail. My modest take on ruining democratic meetings run on Robert's Rules:

The 15 laws of Meeting Power

Comment author: Jamais_Cascio 10 October 2007 04:35:51PM 9 points [-]

C'mon, Eliezer, be fair: identify who the speaker was that you "probed" in this way, so that people can find the recordings of the talk and exchange at singinst.org to decide for themselves how it went.

As you have it above, aside from the paraphrasing, you omit a couple of important parts of my replies. With regards to the Reason/Hollywood comparison, I go on to say:

"That is, they're both caricatures, and neither one is terribly plausible or complete. There would be some critical benefits to the messy process of the first scenario, and some important drawbacks to the second."

With regards to the "I don't know," I then say:

"This is a point I've tried to make a couple of times here: this is not a solved problem, but it's an important problem, and we need to figure out how to address it."

I certainly did not talk about democracy with any intent of it serving as "applause lights" for my talk -- in fact, given the audience, I expected a semi-hostile response, given my argument against the kind of "rebel nerd" heroism self-image a lot of the AGI community seems to have.

Comment author: Eliezer_Yudkowsky 10 October 2007 05:15:16PM 9 points [-]

BTW, if anyone wants to go to singinst.org and download the audio, you'll note that the actual event did not occur the exact way I remembered it, which should surprise no one here who knows anything about human memory. In particular, Cascio spontaneously provided the Genome Project example, rather than needing to be asked for it.

Generally, the reason I avoid identifying the characters in my examples is that it feels to me like I'm dumping all the sins of humankind upon their undeserving heads - I'm presenting one error, out of context, as exemplar for all the errors of this kind that have ever been committed, and showing none of the good qualities of the speaker - it would be like caricaturing them, if I called them by name.

That said, the reason why I picked this example is that, in fact, I was thinking of Orwell's "Politics and the English Language" while writing this post. And as Orwell said:

In the case of a word like democracy, not only is there no agreed definition, but the attempt to make one is resisted from all sides. It is almost universally felt that when we call a country democratic we are praising it: consequently the defenders of every kind of regime claim that it is a democracy, and fear that they might have to stop using that word if it were tied down to any one meaning.

If you simply issue a call for "democracy", why, no one can disagree with that - it would be like disagreeing with a call for apple pie. As soon as you propose a specific mechanism of democracy, whether it is Congress passing a law, or an AI polling people by phone, or government funding of a large research project whose final authority belongs to an appointed committee of eminent scientists, et cetera, people can disagree with that, because they can actually visualize the probable consequences.

So there is a tremendous motive to avoid criticism, to keep to the safely vague areas where people will applaud you, and not to make the concrete proposals where people might - gasp! - disagree.

Now I do not accuse you too much of this, because you did say "Genome Project" when challenged instead of squirting out an immense cloud of ink. But it is why I challenged you to define "democracy". I think that the real value in these discussions comes from people willing to make concrete proposals and expose themselves to criticism.

Comment author: Fyrius 22 April 2010 06:49:22AM *  17 points [-]

Hum.

When I hear the sort of thing you would call "applause lights", I don't always think of that as an obvious fact that everyone in their right mind would agree on. Rather, I get the impression the speaker is implying that someone they strongly disagree with does believe this obvious fact is not true, or that this ridiculous notion is.

If for example I hear someone say "we shouldn't be hugging criminals, we should be locking them up", I interpret that as a very one-sided opposition to a grossly misrepresented opponent who goes a bit easier on convicts. Of course this person wouldn't literally believe the reverse that "we should be hugging criminals instead of locking them up", but she might believe something that a bigot could paraphrase as such with a straight face.

I think this is also the reason why the speaker's supporters applaud statements like that - it implies the issue is very simple and clear-cut, only one side (ours) is remotely sensible, and you'd have to be insane to disagree. One-sidedness feels good. Very blatant one-sidedness feels even better.

(Excuse me if this has been said already.)

Comment author: NancyLebovitz 22 April 2010 09:59:12AM 3 points [-]

I haven't seen it laid out so clearly anywhere.

The only thing I'd add is that it's very easy to fall into that error reflexively. It isn't generally a matter of conscious strategy.

Comment author: pnrjulius 19 May 2012 05:43:54AM 0 points [-]

Hence, an applause light is a form of strawman argumentation?

That sounds about right actually.

Comment author: Fyrius 31 May 2012 12:45:56PM 0 points [-]

It can be used for that, at least.

Comment author: DSimon 14 June 2010 03:42:49PM 6 points [-]

(Hi everyone; this is my first time posting here.)

If someone delivered that 100%-applause-light paragraph to me in a speech, my first impulse would be to interpret it as an honest attempt to remind the audience of obvious but not necessarily currently-in-context ideas. For example, this statement from the middle:

"To achieve these goals, we must plan wisely and rationally. We should not act in fear and panic, or give in to technophobia; but neither should we act in blind enthusiasm."

Taken literally as a set of assertions, this really is quite empty of novel or unexpected content. However, directed at an audience of humans, aware of but still vulnerable to cognitive bias, the statement above implies another statement which is more useful: "We should be careful to not act like <group X> who, despite intending not to, panicked rather than thinking productively. We should also be careful to not act like <person Y> whose enthusiasm overwhelmed their necessary sense of caution, even though they knew the value of that caution."

People who agree with the part of the 1st virtue that says "A burning itch to know is higher than a solemn vow to pursue truth" may still sometimes need to be reminded to check themselves and make sure they're doing the former rather than the latter.

Comment author: Document 07 December 2010 07:28:33AM *  2 points [-]

This sounds similar to the idea of a "motherhood statement" as defined here.

Comment author: pnrjulius 19 May 2012 05:45:32AM 2 points [-]

That second definition applies to most depictions of transhumanism in fiction. It's the rare author who is bold enough to say, "The implants that we put in our brains? Yeah, they actually make us better."

Comment author: TheOtherDave 19 May 2012 03:00:51PM 1 point [-]

Pretty much all the fiction I read in which brain implants are mentioned at all treat them as improvements.

Comment author: pnrjulius 23 May 2012 03:57:22AM 1 point [-]

Really? Got any examples?

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where it was just better, the way that actual technologies often are---would any of us willingly go back to the days before electricity and running water?

Comment author: Swimmer963 23 May 2012 04:05:02AM 1 point [-]

I've read some in which the transhuman technologies were ambiguous (had upsides and downsides), but I can't think of any where it was just better, the way that actual technologies often are---would any of us willingly go back to the days before electricity and running water?

Having upsides and downsides isn't the same thing as being ambiguous. Running water and electricity do have downsides–namely, depletion of water tables due to overuse, and pollution, resource depletion, and possibly global warming due in part to the efforts required to make electricity...But I wouldn't say that either technology is ambiguous. The advantages pretty clearly outweigh the disadvantages, which are avoidable with some thought and creativity.

Comment author: TheOtherDave 23 May 2012 04:32:57AM 1 point [-]

Most of Peter Hamilton's stuff comes to mind, for example. Implants are just another technology, treated no differently than guns or cars. The Greg Mandel books have a few characters who do end up with implants that they would prefer not to have, but they're the exceptions.

Comment author: Nornagest 23 May 2012 07:04:31AM 1 point [-]

would any of us willingly go back to the days before electricity and running water?

Well, they're hardly common, but anarcho-primitivists do exist.

Comment author: Hul-Gil 23 May 2012 07:07:32AM 0 points [-]

but I can't think of any where it was just better, the way that actual technologies often are

I find that a little irritating - for people supposedly open to new ideas, science fiction authors sure seem fearful and/or disapproving of future technology.

Comment author: Nornagest 23 May 2012 07:23:17AM 1 point [-]

Part of me thinks that that's encoded into the metaphorical DNA of the SF genre (or one branch of it) at a very basic level. It's been conventional for a while to think of SF as Enlightenment and the rest of spec-fic as Romantic, but the history of the genre's actually more complicated than that; Mary Shelley, for example, definitely fell on the Romantic side of the fence, and later writers haven't exactly been shy about following her lead. The treading-in-God's-domain motif is a powerful one, and it's the bedrock that an awful lot of SF is built on.

Comment author: bigjeff5 31 January 2011 06:11:31AM 3 points [-]

Oy, now that you've said it, I hear speeches like that all the time. Whole discussions between opposing sides, even. Perhaps that's why I haven't been able to stand cable news for a while now?

Comment author: Polymeron 23 February 2011 03:30:24PM *  11 points [-]

When I first read this, I imagined a favorite politician (I won't mention who) giving this mock speech.

To my embarrassment, I found myself nodding in completely genuine enthusiasm. This guy clearly knows what he's talking about!

(This in turn made me consider just how much of this politician's speeches was similarly composed. I came to the conclusion that quite a significant amount of it was.)

...Nobody ever told me cognitive bias would be this annoying!

Comment author: TheOtherDave 23 February 2011 05:57:52PM 3 points [-]

Upvoted because I endorse the willingness to notice one's own biases.

So, next question, if you're willing: what are three things you could do to reduce the degree to which this sort of empty rhetoric leads you to endorse the speaker?

Comment author: Polymeron 23 February 2011 07:02:48PM *  5 points [-]

TheOtherDave, that is a very constructive approach :)

I am already prone to requiring policy specifics from politicians and being dissatisfied with vague points. But one thing I (and many others) do have is a tendency, when hearing a few specifics in a sea of "general direction" applause cues, to notice that my own preferred solution is compatible with the speech; and from compatibility, I get hope that they would implement it - despite a lack of evidence that they're even aware of such a solution, much less want to implement it. So this is something to be cautious of and to note mid-speech.

I could go further and try to strike from mental record anything that isn't specifics, making a point-by-point list of substantive statements. An easy way to do this is to ask "is anyone really considering doing otherwise? No? Then it doesn't count. Yes? Then why are they?" This method might not always be wise - motivations and beliefs are also important in trying to predict a politician's future choices on questions they did not yet address, and the speech can reveal those. However, it would be a good mental exercise when trying to evaluate positions on a specific policy question.

Lastly, try to separate emotional jargon from actual policy. If your politician says we "need to be prepared for the 21st century", recognize the fuzzy excitement that this statement gives you and squash it - it's caused by the phrase "21st century" being linked in your mind with progress and technology. Wait until that politician says they're going to specifically invest in technological literacy of 8th graders before you give it any significance, and treat it as suspect until then. (This is very similar to the first thing I suggested, except it focuses on recognizing an immediately triggered emotion in response to a phrase, rather than your own mind building scenarios which then in turn excite you).

I'll try to remember all that for the next speech I hear :P

Comment author: TheOtherDave 23 February 2011 08:12:33PM 4 points [-]

I definitely endorse tracking specific proposals/substantive assertions, and explicitly labeling vague or empty assertions that nevertheless elicit positive feelings or invite you to project your own preferences onto the speaker.

I definitely endorse asking the "is anyone really considering doing otherwise, and why?" question.

Something I also find useful is explicitly labeling implied affiliations.

E.g., consider the difference between "we need to prepare our children with the tools they need to be leaders in the 21st century," versus "we need to instill our children with the values they need to make the right choices in the 21st century." They are both empty statements -- I mean, who would ever claim otherwise? -- but in the U.S. today the former signals affiliation with teachers and thereby implies support for public schools, education funding, etc., while the latter signals something I understand less clearly.

And those in turn signal alliances with major political parties, because it's understood by most U.S. voters that party A is more closely tied to education and party B to values.

In fact, even if the statement includes a specific proposal, it is often worth labeling the implied affiliation.

Comment author: pnrjulius 19 May 2012 05:48:42AM 0 points [-]

It's interesting; with the connotations and associations in our discourse, I can actually make some predictions about planned policies from those two supposedly "empty" statements.

The former is probably going to spend more money on math and science education.

The latter is probably going to fund "faith-based initiatives" or something similarly silly and religious (but I repeat myself), because "values" in American politics is almost always code for "conservative Evangelical Christianity".

So does this mean that they really aren't empty at all?

Comment author: TheOtherDave 19 May 2012 02:47:11PM 0 points [-]

Well, yes, I chose those statements precisely because of their connotative affiliations.

As for whether they're really empty... (shrug).

In ordinary conversation I would consider "I like likable things!" an empty statement, but of course it conveys an enormous amount of information: that I am capable of constructing a grammatical English sentence, for example, which the Vast majority of equivalent-mass aggregations of particles in the universe are not. I can use a different term to describe that category of statement if this one is too ambiguous.

Comment author: potato 02 August 2011 08:36:55PM *  1 point [-]

"Applause Light" is a wonderful name for that tactic; it's funny, catchy, and makes the problem with that tactic intuitively obvious. That term should be further proliferated throughout the internet if it hasn't already been. Adding that meme to the average internet goer's repertoire could have wonderful side-effects on the support decisions of people in meat space everywhere.

Comment author: MarkusRamikin 11 August 2011 09:50:01AM 3 points [-]

That applause-light speech at the end just needs some variation, and I'm pretty sure it would fly. I'd replace about half of the "we should" with something else, like "it is important that we", and "it would be dangerous to neglect" and so on, because right now it's so repetitive that surely a lot of people would notice and realize what's being done.

Or maybe I'm yet again overestimating my fellow human beings, as past experience says I am prone to do...

Comment author: Jeremygbyrne 21 August 2011 04:08:16AM *  1 point [-]

I am tempted to give a talk sometime that consists of nothing but applause lights

http://www.youtube.com/watch?v=pxMqSdgB-uA (appropriately titled "Unthink").

Comment author: jeromeapura 25 August 2011 11:37:59AM 0 points [-]

You got him with a nice Socratic question. A good question seeks out good ideas and eliminates inane ones. Nice.

Comment author: Matvey_Ezhov 03 September 2011 10:27:22AM 0 points [-]

There might be one other case: in casual conversation, something that looks like an applause light could just be an expression of a particular person's recent insights on the subject. Maybe he just yesterday deduced (based on some fragments of rationalist texts on the web) that we should balance the risks and opportunities of this. Or maybe the audience's level on the subject is so low that even the applause-like statements convey some information.

Comment author: buybuydandavis 22 September 2011 11:31:24AM 0 points [-]

"Let's do everything right."

Yep. Standard political speech.

Comment author: Maha 14 December 2011 12:32:56AM 1 point [-]

I don't think these statements are entirely vacuous. Even when their content is little more than a tautology, their actual meaning is something else entirely, at least in politics: they signal that the speaker is aware of the jargon, willing to use it, essentially moderate/"pragmatic", and prone to maintaining the status quo.

Comment author: macronencer 16 January 2012 11:37:26PM 0 points [-]

I couldn't resist adding another link as an example of a speech that seems to consist almost entirely of applause lights. This one is vintage Peter Sellers.

http://www.youtube.com/watch?v=GxBtGuu9BVE

Comment author: taelor 24 February 2012 11:03:51PM 2 points [-]

Applause Lights also have a more sinister, dark-artsy application: they can be used to bait people into agreeing with seemingly trivial propositions, which nevertheless cause the target to modify their self-image, rendering them more likely to agree with less trivial propositions in the future. For example, Cialdini's Influence reports on a study that found that households that had been visited by a volunteer collecting signatures in favor of the vague statement "keep California beautiful" (without ever specifying how this was to be accomplished) were much more likely to agree to prominently display a large, ugly sign about safe driving in their yards than households that hadn't been so visited.

Comment author: olalonde 27 April 2012 05:09:57PM 0 points [-]

can convey new information to a bounded rationalist

Why limit it to bounded rationalists?