Benevolent and malevolent media producers possess the power to influence society in positive and negative ways. They can do this through agenda setting, framing, priming, spreading memes, altering perceptions of groups and individuals, outright propaganda, and other methods.

I think more attention should be paid to the pathways from content to effects, so that we can optimize our cultural landscape.

Lest this post soon turn into Applause Light Vegas, I’ll now get into some methods I think can be used to sway mass opinion in a direction amenable to making the world better. Many of these methods deal with familiar biases, heuristics, and psychological effects.


Media Use Facilitating Positive Social Change

First, the mass media possesses the power to alter estimates of the likelihood and frequency of specific occurrences. Think back to some of the classic examples of the availability heuristic. When asked whether there are more homicides or suicides in the United States, people answer that there are far more homicides, even though the reverse is true. The mass media report on homicides far more often than they report on suicides, so people have more available instances of homicide in their memories, and these come to mind more easily. This influences their beliefs about the real world, which can then be politicized to produce different stances on gun control and education. The priorities of a culture with a homicide problem are not the priorities of a culture with a suicide problem.

This effect is consistent with some theoretical models of the mass media’s impact on society. Cultivation theorists understand the media, especially television, as a system of coherent memes and messages reflecting a society’s dominant ideology. If we accept the fundamental claim of cultivation theory then we should hypothesize exposure to television to be positively correlated with status quo beliefs and attitudes. We might then expect high exposure to non-fiction television to lead to mean- and scary-world beliefs, given the disproportionate amount of media coverage homicides receive. One cultivation theorist found this result, yet did not find the same effect on heavy fiction viewers.

Malevolent, benevolent, and clueless media producers could capitalize on the availability heuristic to adjust mass estimates of society’s biggest problems, and by extension, mass assessments of social priorities. This ability of the mass media to affect the perceived importance of subjects by representing or not representing them is sometimes called agenda setting. Want different kinds of people to take existential risks seriously? Maybe get existential risks mentioned in media outlets that different types of people read. Use Medium.com, pitch articles to The Guardian, write an editorial to your local newspaper, increase the representation of important issues on Wikipedia, and so on. You don’t even need to convince people that Friendly AI should be a global priority as much as you need to convince them that thinking so doesn’t make you crazy. Exposing people to AI concerns without coming off as a clear member of a disliked outgroup (e.g. conspiracy theorist) can play a big role in legitimizing the issue in the public’s eyes. 

Politicians and media outlets can also make use of framing devices to influence audience perspectives on news stories by tweaking irrelevant factors. A newspaper headline claiming, “Public condemnation of democracy should not be allowed” will receive more support than will one that claims, “It is right to forbid public condemnation of democracy.” If you’ve ever heard a politician speak, you’ve probably noticed how they frame everything they say in a way that makes it sound better than it is. Similarly, a headline will have very different connotations if it describes an event as a “strike” or as an “invasion” or as a “bombing.” (And was it committed against “soldiers” or “forces” or “rebels” or “terrorists”?)

Framing isn’t purely a word-selection thing. It can be done with audio-visual media as well. Film fans among you may have heard of the Kuleshov effect, discovered with a famous experiment that used and re-used a single close-up of a man’s face against a series of different images, such as a bowl of soup, a little girl smiling, a funeral. You can watch a short example here. Each time we cut back to the man, his face appears to express a different emotion even though it’s actually an identical shot of his face. Soviet Montage filmmakers capitalized on this effect in their movies to express meanings through the juxtaposition of different shots.

 

Biases Facilitating Social Stagnation

Media producers have much more to think about than the biases and heuristics that facilitate persuasion. They also have to examine the psychological and cultural factors that entrench ideas in our heads. The mind is the Hotel California of ideas – once one gets in there, it might never see the light of day again. What are some of those forces that keep us from changing our minds?

The first important factor to consider is selective exposure. Before an idea can sound persuasive to you, it has to get in front of you. This is harder than it seems because people don’t want to be confronted with ideas they disagree with. Confirmation bias predisposes them to crave ideas they already agree with. If I’m an atheist surfing YouTube, am I going to click on “Creationist moron DESTROYED with a Hitchslap” or on “How to prove atheism wrong in 8 seconds”? People avoid the stuff that doesn’t seem like it’ll cater to their beliefs.

If you want to get existential risk and AGI messages in front of new audiences, you need to find ways to make your stances on those issues seem somewhat consistent with a lot of other peoples’ current views. Getting important undercovered ideas into the public eye will probably mean smuggling them there. A TV station only covering existential risks can easily be ignored by all the people with no interest in existential risks. (LessWrong is sort of an online equivalent to this.) Instead, you may have to smuggle your important ideas into a mixture of more mainstream content.

The primacy effect suggests that the earliest information people receive about an issue is likely to shape their thinking on that issue, biasing them in favour of that viewpoint. This suggests to me that it might be a good idea to find subjects on which people haven’t completely formed their ideas yet. If you can give people good ideas before they get a chance to form bad ideas, they’ll be more partial to your ideas than if you try to convince them that their fully-formed "bad" ideas are inferior to yours. My impression is, relative to secularism and the dangers of technological progress, that fundamental anti-speciesist and effective altruist ideas are subjects on which people are still forming their ideas.

Mass media agenda setting also works in combination with other biases. The third person effect is the occurrence of people overestimating the magnitude of the media’s influence on other people. Do you ever assume that a political attack ad, or a marketing pitch, or porn, or a violent video game probably affects a lot of people – while being very confident that you aren’t one of those people being affected? You, of course, are much too clever but those other people are surely easy pickings for propagandists. This view is probably closely related to overconfidence and the bias blind spot.

Davison points out that the third person effect can play into pluralistic ignorance. Misperceptions of public opinion can lead to the majority reinforcing behavioural norms that only the minority of a population agrees with. If we assume that the majority of persuasion tactics we see in the media are successful on other people, then we’re going to wind up with skewed ideas of what everyone else believes. In our current era of demassification and social media, we live in so-called “cyber ghettos” where most of our information comes from people on our social media feeds and others that already agree with us. This probably leads to an overestimation of the popularity and mainstream-ness of our ideas.

 

Media Use Facilitating Negative Social Change

So, all sorts of biases and heuristics prevent ideas from leaving Hotel California - and on the scale of a culture, this creates memetic stagnation. Now, let’s look at how the mass media can be deliberately used to create negative change. Understanding how this works can help us squash the deliberate spread of misinformation or improve our more benevolent methods of media persuasion.

There is a field called “agnotology” that is about exactly this: ignorance, how and why it’s produced and maintained. When examining a field, a good indicator of whether there’s disinformation at work is whether there exists a divide between expert opinion and public opinion. My impression is that this is the case for many of the issues that interest LessWrongers.

The strategies of disinformation are well known to those familiar with the debates on climate change and evolution by natural selection. One strategy is to assert the absence of scientific consensus by citing the dissenting opinions of scientists in unrelated fields. Another is to point out past blemishes on science’s track record, often using examples taken from the popular literature, rather than from peer-reviewed academic journals. Finally, deceivers can draw attention to fringe parts of a theory that are indeed controversial rather than focusing on the core tenets of the theory that are widely accepted by experts.

Along the same lines as the availability heuristic, media coverage can alter estimates of the extent of scientific consensus on empirical facts. News programs can intentionally or unintentionally contribute to the appearance of scientific controversy by treating both sides of an argument equally, creating the illusion of equal credibility. Further, in attempting to make science palatable to mass audiences, the mainstream media may inadvertently oversimplify or misrepresent scientists, thereby spreading misunderstanding. As a result, it might make more sense to write your own articles instead of going through a middleman that is knowledgeable about journalism but not about your topic.

Hotel California only truly shuts its doors once you’ve left the front lobby and gone up to your room. New information is very briefly “believed” before it is rejected. When reading a novel, we do not suspend disbelief as much as we willingly construct disbelief immediately after believing. But it doesn’t feel that way to the Rider on the Elephant. Sometimes we err in figuring out which ideas have gone up to their rooms and which have exited Hotel California immediately after entering the lobby. There is a whole literature on narrative persuasion – how fiction can lead to the absorption of false beliefs. After reading fictional narratives including statements like, “mental illness is contagious” and “brushing your teeth doesn’t actually make your teeth cleaner,” people are more likely to reproduce those errors on future tests. This effect is even stronger after a two-week gestation period, revealing an absolute sleeper effect.

Even when a retraction immediately follows a statement, it usually fails to eliminate the initial effect. If I tell you, "Woody Allen's real name is Jacob Allen," then I have just poisoned your mind in a sense even if I immediately tell you I made the name up. If you were on Who Wants To Be A Millionaire and Jacob was one of the options, it would sound more familiar to you than the alternatives even though I'm making it perfectly clear I have no idea what Woody Allen's real name is. For all I know, it's Woody Allen.

One reasonable explanation of this phenomenon is that listeners form mental models of the stories they hear (e.g. Event A leads to Event B leads to Event C). When one of the events of the story is retracted (“Actually, I lied: Event B never happened!”), the listener’s mental model is left with a gaping hole, as Event A would not lead to Event C without the prior occurrence of Event B. Filling this coherence gap with an alternate account of events is a confirmed way to break the continued influence of misinformation. Many other helpful techniques are offered here.


More Ideas for Optimizing Media Use

One good tool for world-changers to have is a list of memes they’d like to spread to a larger audience. Since our uncertainty about the future is high, the selected memes should be very safe messages that are difficult to abuse or to steer society in bad directions if accepted en masse. For instance, a meme like "Technological progress is good" may be generally true, but it could easily lead to untrue beliefs or bad consequences if accepted too dogmatically. In contrast, "Racism is bad" seems almost impossible to misuse.

Examples of "safe" memes I would expect to have net positive consequences:

Racism is bad
Sexism is bad
Speciesism is bad
Homophobia is bad
Xenophobia is bad
Belief without evidence is bad
Recycling is good
Defining one’s terms before an argument is good
It’s important to be willing to change one’s mind
One should learn the basic skills of rationality
A lack of absolute certainty does not equal a lack of objectivity
Moral reasoning can be useful
Etc.

The point of a fluctuating list of good memes is that it hierarchizes ideas and causes one to consider how likely specific memes are to be misinterpreted or misused. It also prevents one from getting off track. If you have a list of memes in mind, you can use it to guide your creative decision-making.

It could also be helpful to focus on specific political issues that are hot at a given time. For example,

Party X should win the election
War X should not happen
Apartheid X should be stopped
Abortion should(n’t) be legal
Gay marriage should(n’t) be legal
Capital punishment should(n’t) be practiced
Gun control laws should be stricter/left alone
Climate change should(n’t) be taken seriously
The rich should(n’t) be taxed more
Etc. 

Some of these issues might be far less important than the media and politicians make them seem, but knocking them down, one by one, could probably pave the way for more meaningful change. Perhaps more importantly, they win a battle of principles and prevent the tides from gaining momentum in the opposite direction.

More specific to the issue of an intelligence explosion, the uncanny valley hypothesis suggests that people experience revulsion at the sight of a humanlike-but-not-human thing. This suggests that if one wishes to spread general resistance toward the development of AGI, it would be wise to make a point of associating AGI with these ugly humanoid depictions. On the other hand, if one wanted to spread general acceptance of AGI, it would be good to avoid such depictions.

Another approach is culture jamming. Culture jamming usually means “subvertising” ads by creating TV commercials and billboards that turn corporate ads on their head. Click here for some basic examples. These campaigns build cynicism against corporations and politicians, fuel dissent, and prime people for more world-changing behaviour.

It’s also important to consider the audience of a given message. The average person may not need a reminder to develop their social skills or learn how to communicate, but the average LessWronger probably does. Similarly, there’s no need to convince rationalists that atheism is acceptable because they already believe so – but it remains, I think, a good meme to spread to the broader public. The outward image of activists to the public should consist mainly of moderate, socially acceptable ideas. These topics are not necessarily more important than the more esoteric topics, but they are more likely to be memetically effective because they are consistent with a wide number of outlooks.

Lastly, an important tool for social change is the “nudge” because it guides people toward better decision-making without removing their freedom to choose. The clearest cases where nudges are effective in shaping culture involve appeals to social proof.

Some examples from Thaler and Sunstein’s book: obesity is socially contagious, federal judges are influenced by the votes of their colleagues, 12% of participants choose “subversive activities” as the biggest current issue when asked in private compared to 48% when asked publicly, self-reported musical taste is hugely influenced by the self-reported tastes of others, the amount of food people eat correlates with the number of people they eat with, tax and recycling compliance can be increased by informing people that the compliance level is high, binge drinking and smoking rates can be reduced by informing the public of unexpectedly low drinking and smoking rates, and people can be nudged to reduce their energy use by informing them that their energy use is above average.

Do you have any others to add to this list? Was there anything useful in this post you didn't already know?


There is a field called “agnotology” that is about exactly this: ignorance, how and why it’s produced and maintained. When examining a field, a good indicator of whether there’s disinformation at work is whether there exists a divide between expert opinion and public opinion.

That sounds naive. If you ask yourself whether there is disinformation in the coverage of topic X in the mainstream media, the answer is "yes" no matter the issue. Journalists write stories under tight timetables without much time to fact-check. They are also under all sorts of other pressures that aren't about telling the truth as it happens to be.

If you want to get effective altruist, existential risk, and AI messages in front of new audiences, you need to find ways to make your stances on those issues seem somewhat consistent with a lot of other peoples’ current views. Getting important undercovered ideas into the public eye will probably mean smuggling them there.

From what I personally experienced in doing Quantified Self press interviews I don't think that's the case. I see no reason why a journalist shouldn't want to report about effective altruism.

More specific to the issue of an intelligence explosion, the uncanny valley hypothesis suggests that people experience revulsion at the sight of a humanlike-but-not-human thing. This suggests that if one wishes to spread resistance toward the development of AGI, it would be wise to make a point of associating AGI with these ugly humanoid depictions. On the other hand, if one wanted to spread acceptance of AGI, it would be good to avoid such depictions.

AGI is more complicated than being for or against it. We have specific objectives such as increasing FAI research that are complex issues. Making people associate AGI with depictions of ugly humanoids makes them model the problems of AGI wrong.

Use Medium.com, pitch articles to The Guardian, write an editorial to your local newspaper, increase the representation of important issues on Wikipedia, and so on.

I don't see the point of Medium.com. Why should you focus effort on it? On the other hand I agree that the Guardian is a good place when you want to publish an article about a topic. Editing Wikipedia in a way that important topics get proper representation seems to be effective.

As far as the local newspaper goes, I'm not sure. It might depend on how local it happens to be. Blogs that make their money by having articles with high click rates might be more accessible than local newspapers.

I personally got into the position of doing QS press work by accident, and it's German media. I can't tell you how to replicate what I did. I can, however, point you to Ryan Holiday's book "Trust Me, I'm Lying: Confessions of a Media Manipulator". In it he explains how you can get press attention in the US for whatever issue you want to put into public consciousness. Ryan is very much worth reading because he's not giving you ivory-tower ideas. He's also not giving you some ineffective "culture jamming". He's giving you the strategies that he used, among other things, as the marketing manager of American Apparel.

If you want to affect the US public debate, then read the book. Even if you just want to understand how the US public debate works, read it.


Medium's team went and created thegrid.io recently. I signed up after it exploded in popularity without any delivered product. People have had access to the beta since, and videos have gone up... basically it's shit. I regret signing up for the grid. How underwhelming. I haven't even been given access to their beta yet... what a joke. Artificial intelligence in blogging? Pffft. Waste of my money. I learned a lesson there.

That sounds naive. If you ask yourself whether there is disinformation in the coverage of topic X in the mainstream media, the answer is "yes" no matter the issue. Journalists write stories under tight timetables without much time to fact-check. They are also under all sorts of other pressures that aren't about telling the truth as it happens to be.

Yes, it's ubiquitous, but some fields and issues are more affected than others, usually due to politicization. Tight timetables may apply to all stories but not all pressures do.

From what I personally experienced in doing Quantified Self press interviews I don't think that's the case. I see no reason why a journalist shouldn't want to report about effective altruism.

You're right that effective altruism isn't so radical that a broader public wouldn't take interest in it. I probably shouldn't have included it alongside existential risks and AGI. I'm editing my post to remove it from that sentence.

AGI is more complicated than being for or against it. We have specific objectives such as increasing FAI research that are complex issues. Making people associate AGI with depictions of ugly humanoids makes them model the problems of AGI wrong.

As you already suggested, oversimplification and distortion are routine parts of journalism. Limiting yourself to coverage that appropriately models the problems of AGI essentially means exiling yourself from news sources that people unlike yourself want to read. My suggestion is also a kind of cheap marketing trick or flourish rather than a full-on FAI outreach campaign. I'm not all that confident this trick would accomplish anything.

I don't see the point of Medium.com. Why should you focus effort on it? On the other hand I agree that the Guardian is a good place when you want to publish an article about a topic. Editing Wikipedia in a way that important topics get proper representation seems to be effective.

Medium.com wasn't selected for its being optimal - it's just a random example of a website you could post to with a very different viewership. I agree that The Guardian and Wikipedia are better bets.

I can, however, point you to Ryan Holiday's book "Trust Me, I'm Lying: Confessions of a Media Manipulator".

Thanks. I'll check this out.

You're right that effective altruism isn't so radical that a broader public wouldn't take interest in it. I probably shouldn't have included it alongside existential risks and AGI. I'm editing my post to remove it from that sentence.

I don't think trying to be moderate instead of radical is the right way to think. If you can give a journalist a story about how a radical movement does X and polarizes the world, then you are giving that journalist what he wants. That's sort of what I did when doing QS press work.

Take 3D-printer guys like Bre Pettis. Does the average person care about having a 3D printer at home? No, the average person doesn't. Yet Bre is a master at telling a story about finally bringing the means of production to the average person, so that the means of production aren't in the hands of capitalists but in the hands of the people.

That's not the only story he tells. If you look at his TED talk, he begins by winning over the audience with talk about the school system. He starts by saying that you can't teach creativity by teaching to the test. That's a meme that resonates with a bunch of people. Then he tells a story about how the MakerBot helps people express their creativity. The next meme he pushes is that if you make something yourself, it's yours. It's better than buying it off the shelf. He brings back education and announces the MakerBot as the solution for the flawed education system. Halfway into the talk he says he dropped out of school and recommends kids do the same, then he changes the subject again. Later he talks about how teachers need a backup plan for when parents come into the school and complain that the children are having fun because of the MakerBots, and about how the school administrator should be won over for the project and be able to tell the parents what bureaucratic standards it fulfills.

Bre Pettis manages to beat most 21st century musicians at being cool. I've seen him multiple times live on stage and the way he works the room is amazing. Bre's image does not consist mainly of moderate, socially acceptable ideas.

I've talked lately on LW about the status of intelligent people and about standing our ground. Bre is the perfect example of how that works in practice. If you appear to bend to the status quo, you aren't cool. If you can get away with dropping a side remark about how you dropped out of school and kids should do the same, you are high-status and cool. Especially when you are holding a sales talk arguing that schools should buy your MakerBots and your agenda depends on convincing school teachers.

I'm sure Bre could give a talk on asteroid defense that makes asteroid defense sound really cool and that would motivate journalists to cover it. It probably requires speaking about asteroid defense instead of speaking about the detection of near-Earth objects, but the change in wording isn't costly.

When NASA announces something about going to Mars, you could contact news programs and say that you think NASA has its priorities extremely wrong if it focuses on Mars instead of taking asteroid defense seriously. If you have at least some reason why you are an authority on asteroid defense, it doesn't cost the journalist anything. Fox News is happy to have someone who slams Obama's NASA policy, so it's win-win. You get to speak about asteroid defense, and they get someone who says that Obama's spending policies are wrong.

Of course Fox News wants someone who's presentable in front of a camera, so you need some public-speaking ability and, ideally, some video openly available. YouTube provides an easy venue for publishing videos.

If you want to go there, the plan might be: form a little nonprofit with a name related to asteroid defense. Give it a nice website. Be its president. Publish some YouTube interviews with yourself on it.

The next step might be The Guardian's Comment is Free. Get an article on it that outlines the benefits of taking asteroid defense seriously. Then you wait till the next announcement related to NASA policy and tell many news organisations that you can provide them with an angle on the topic that isn't already in the media spotlight.

If you are a Democrat at heart, you can sell it by saying, "Usually I agree with Obama, but on this issue he's really wrong..." On the other hand, you do have to compromise and accept that you are making a move that doesn't help the agenda of the Democratic Party. While you are talking about asteroids, you might make a more general point about Xrisk without interfering with what anyone else wants from the story.

If you want a cool hobby, doing that project of asteroid defense PR for two years might be a lot cooler than going to the archery range to shoot arrows, or glassblowing. Cool in the sense that getting the project to work shows that you are intelligent. Also cool in the sense that you can impress the kind of people you meet at effective altruist events with it.

I also think it's more cool than doing the kind of things you labeled as culture jamming.

I think we might be using different definitions of "radical" and "moderate, socially acceptable." I'm not referring to things that massively impact society, but to things that clash with widely held values and attitudes.

3D printing doesn't strike me as an idea most people negatively associate with "radical." More importantly, even if it was, it is possible to present a "radical" or "weird" or "unfamiliar" idea in a way that it appears not to clash with people's values and attitudes.

That's what I suggest people be cautious to do. When you tell the average American that you fear an artificial agent will destroy humanity in this century, you are going to get mainly aversive reactions - in a way that you won't get by telling people you think 3D printing will revolutionize our socioeconomic structure. Do you disagree with that?

That doesn't doom FAI research to eternal neglect, it just means FAI outreach people need to be cognizant of the fact that they're fighting an uphill battle toward persuasion that most outreach and marketing campaigns don't have to face. As a result, it's important to frame FAI as something consistent with most people's attitudes. That probably involves leaving out certain details. As Bre Pettis demonstrates with the school dropout point, there can be socially acceptable ways to express minority views. He "smuggled" that idea into his talk about a non-offensive subject.

I should also add that I wrote that topic with the idea of a media platform in mind (that's why I made the comparison to LW and not to individual posts on LW). So if you ran your own TV or radio station, I think it would be a better idea to use "Compromise and Smuggle" than to cover only subjects such as cryonics, transhumanism, wild animal suffering, etc. In the latter case, you may cover topics you believe to be more important, but your station will be too easy for people uninterested in those topics to ignore. If you include some status quo material, you can lure in some unsuspecting listeners who will also catch the less conventional stuff. I think it can be compared to a pharmacy taking a loss on a product ("Toothpaste, $0.25 a tube!") just to get customers in their store, where they'll likely buy other stuff on the shelves.

If you're just submitting an article to a pre-existing news source then, as you say, you don't really need to consider this. The mainstream content is probably the bulk of what they cover, so they'll welcome your unconventional post.


I have no idea about culture jamming's effectiveness. I read the book "Culture Jam" by Kalle Lasn, head of Adbusters, and it was pretty horrible. My impression is that it fuels cynicism and dissent. I support its existence because I think different tactics work on different people.

That doesn't doom FAI research to eternal neglect, it just means FAI outreach people need to be cognizant of the fact that they're fighting an uphill battle toward persuasion that most outreach and marketing campaigns don't have to face.

I agree that FAI outreach is hard PR-wise. Terminator did succeed in putting memes about an evil Skynet into public consciousness, but those memes are not really the ones we want, even if they make some people opposed to AGI research.

The kinds of memes we want to push are more complex. I also don't know if we have actually decided which memes we want to push. I personally don't know enough about FAI to be confident in deciding which memes benefit the agenda of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kind of memes it actually wants to transmit to a broader public.

My impression is that it fuels cynicism and dissent. I support its existence because I think different tactics work on different people.

But we don't want "dissent". Cooperation in the makerspaces where someone like Bre plays a large role is much better than dissent. Focusing on increasing dissent is pointless if you don't provide alternatives.

In the latter case, you may cover topics you believe to be more important, but your station will be too easy for people uninterested in those topics to ignore.

In the 21st century, news sources such as the Economist and Foreign Policy, which don't use pictures to illustrate their stories but write for a high-level audience, increased their subscriber numbers, while outlets that try to pander to everyone, like the New York Times, lost readership and had to lay off many journalists.

As far as written text goes, the people who try to pander to everyone mostly lost in the last decade. Mainstream media lost a lot of its power over the last two decades. Getting a book recommended by Tim Ferriss on the Random Show is much more valuable than getting a book recommended by the New York Times. Tim Ferriss's recommendation might carry more weight than anyone's besides Oprah's.

But even when we look at Oprah, does she try to pander to mainstream views in the usual sense of the word? I don't think she does. A lot of people don't like Oprah. It would be a losing move for Oprah to avoid talking about spirituality just because doing so makes some people hate her. If Oprah went that way, she would lose her base.

If you try to appeal to everyone you will appeal to no one.

If you don't want to make a TV station that finances itself by selling advertising and programs crap into people, I don't think it makes sense to even try to appeal to the mainstream when you start a new TV channel.

Oprah doesn't need everyone to like her. She wants the largest viewership possible. MIRI doesn't need everyone to support it. It wants the most supporters possible.

They don't need to appeal to everyone, but they probably should appeal to a wider audience than they currently do (as evidenced by there being only ~10 FAI researchers in the world), and a different audience requires a different presentation of the ideas in order to be optimally effective.

I don't think pointing new people toward Less Wrong would be as effective as just creating a new pitch just for "ordinary people." Luke's Reddit AMA, Singularity 1-on-1 interview, and Facing the Singularity ebook were pretty good for this but it doesn't seem like many x-risk researchers have put much energy into marketing themselves to the broader public. (To be fair, in doing so, they might do more harm than good.)

The kinds of memes we want to push are more complex. I also don't know if we have actually decided which memes we want to push. I personally don't know enough about FAI to be confident in deciding which memes benefit the agendas of MIRI and FHI. If MIRI wants more PR, the first step would be to articulate what kind of memes it actually wants to transmit to a broader public.

This was one of the suggestions in my post. :) Though I'm not sure it's possible to communicate about AI and only spread "complex" memes. I think about memes more in terms of positive and negative effects rather than in terms of their accuracy.

MIRI doesn't need everyone to support it. It wants the most supporters possible.

I don't think that's the case. MIRI cares a lot more about convincing the average AI researcher than about convincing the average person who watches CNN.

If you start a PR campaign about AI risk that results in bringing a lot of Luddites into the AGI debate, it might be harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI person might think the Luddites oppose AGI for all the wrong reasons. He's not a Luddite, so why should he worry about UFAI?

If you look at environmental policy, reducing mercury pollution and reducing CO2 emissions are both important priorities. If you just look at what's talked about in the mainstream media, you will find a focus on CO2 emissions. I think few people know how good the EPA's policy on mercury pollution under Obama has been. The EPA made a really great move to reduce mercury pollution, but it didn't hit major headlines.

The policy wasn't the result of a press campaign; it mostly happened silently in the background. On the other hand, the fight over CO2 emissions is very intense, and the Obama administration didn't get much done on that front.

I think about memes more in terms of positive and negative effects rather than in terms of their accuracy.

That's the sort of thing that's better not said in public if you are actually serious about making an impact. If you want to say it, say it in a way that takes a full paragraph of multiple sentences and can't easily be quoted by someone at Gawker writing an article about you in five years, when you do have a public profile. Bonus points for using vocabulary that allows people on LW to understand the idea but not the average person who reads a Gawker article.

That's also something that contradicts the goal you laid out above. You said you wanted to spread the meme "Belief without evidence is bad." If you start pushing memes because you like their effect and not because they are supported by good evidence, you don't get "Belief without evidence is bad."

If you start a PR campaign about AI risk that results in bringing a lot of Luddites into the AGI debate, it might be harder, not easier, for MIRI to convince AI researchers to treat UFAI as a serious risk, because the average AI person might think the Luddites oppose AGI for all the wrong reasons. He's not a Luddite, so why should he worry about UFAI?

Fair enough. I still believe there could be benefits to gaining wider support but I agree that this is an area that will be mainly determined by the actions of elite specialized thinkers and the very powerful.

That's also something that contradicts the goal you laid out above. You said you wanted to spread the meme "Belief without evidence is bad." If you start pushing memes because you like their effect and not because they are supported by good evidence, you don't get "Belief without evidence is bad."

I'm not sure I see a contradiction there. I can see that if I say things that aren't true and people believe them just because I said them, that would be believing without evidence. But "belief without evidence is bad" doesn't have to be true 100% of the time in order for it to be a good, safe meme to spread. If your argument is that the spreading of "Utility > Truth" interferes with "Belief without evidence is bad" so that the two will largely cancel out, then (1) I didn't include "Utility > Truth" on my incomplete list of safe memes precisely because I don't think it's safe and (2) the argument would only be persuasive if the two memes usually interfered with each other, which I don't think is the case. In most situations, people knowing the truth is a really desirable thing. Journalism and marketing are exceptions where it could make sense to oversimplify a message in order for laypeople to understand it, hence making the meme less accurate but more effective at getting people interested (in which case, they'll hopefully continue researching until they have a more accurate understanding). Also, (3) even if two memes contradict each other, using both in tandem could theoretically yield more utilons than using either one alone (or neither), though I'd expect examples to be rare.

By the way, I emailed Adbusters about if/how they measure the effectiveness of their culture jamming campaigns. I'll let you know when I get a response.

Racism is bad
Sexism is bad

(..)

Belief without evidence is bad
Defining one’s terms before an argument is good

Note the contradiction between the two sets of memes above. The first set of memes involves condemning a vaguely defined concept and frequently involves encouraging people to believe things, e.g., that race and sex don't correlate with anything significant, despite nearly all the evidence suggesting otherwise.

that race and sex don't correlate with anything significant

If you define “racism” and “sexism” that broadly, sure, but there are plenty of people who use the terms more narrowly, and wouldn't call David Epstein racist for pointing out that the Kalenjin comprise a sizeable fraction of marathon champions for genetic reasons.

[This comment is no longer endorsed by its author]

David Epstein racist for pointing out that the Kalenjin comprise a sizeable fraction of marathon champions for genetic reasons.

What about calling people like Richard J. Herrnstein and Charles Murray racists for pointing out the correlation between race and IQ?

Also mbitton24's claim was that said memes were "almost impossible to misuse", and some people do in fact define the terms very broadly.

By the way what do you think are reasonable definitions of the terms?

Also mbitton24's claim was that said memes were "almost impossible to misuse", and some people do in fact define the terms very broadly.

Good point. Grandparent retracted.

Tapping out.


A useful option would be an education program that teaches critical thinking, with a focus on bias in the media and advertising tricks.

Even having high school students watch this 15-minute Media Watch show each week would help increase awareness.

I believe this is pretty standard in media, communications, and journalism-ish programs. They don't call it "critical thinking" but they are definitely clear about the bias and tricks.

the amount of food people eat correlates with the number of people they eat with

That's interesting. Does it correlate positively or negatively? And how strongly, with what other assumptions or conditions?
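For what it's worth, the diary studies this finding comes from (John de Castro's work on the "social facilitation of eating") report a positive correlation: people tend to eat more when they eat with more companions. As a sketch of how such a correlation would be checked, here is a minimal Python example; the diary numbers are invented for illustration, not real data:

```python
from statistics import mean
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical meal diary: dining companions per meal vs. calories eaten.
companions = [0, 0, 1, 1, 2, 3, 4, 6]
calories   = [450, 500, 560, 600, 640, 700, 760, 900]

r = pearson_r(companions, calories)
print(round(r, 2))  # strongly positive for this invented data
```

A real analysis would of course also need to control for confounds (meal duration, time of day, weekend vs. weekday), which is where the "other assumptions or conditions" question bites.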

Some policy issues affected by media in democratic countries: Daniel Komo argues that people hear about trade policy (I imagine this is extensible to other kinds of policy) largely because oppositions have an incentive to attack government trade initiatives. But because propagating information is expensive, opponents will often focus their attacks on simpler, easier-to-explain policy decisions rather than more complex ones, since that makes more efficient use of limited media space. He concludes that democratic political competition may lead to what I might call a kind of "reverse" conjunction fallacy: simpler policy decisions tend to get more prime time, coverage, and criticism than more complex decisions.
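The argument above can be caricatured as a tiny cost-benefit model: an opposition with a limited attention budget attacks the policies with the best ratio of political payoff to explanation cost, so arcane policies get crowded out of coverage even when they matter more. The `Policy` structure and all numbers below are invented purely to illustrate the selection effect:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    name: str
    payoff: float      # political gain from a successful attack
    complexity: float  # cost of explaining the issue to voters

def coverage_order(policies):
    """Policies sorted by payoff per unit of explanation cost (highest first)."""
    return sorted(policies, key=lambda p: p.payoff / p.complexity, reverse=True)

policies = [
    Policy("steel tariff", payoff=5, complexity=1),        # simple, visible
    Policy("rules-of-origin reform", payoff=9, complexity=6),  # important but arcane
]

ranked = coverage_order(policies)
print([p.name for p in ranked])  # the simple tariff gets attacked first
```

Note that the higher-payoff reform loses the contest for coverage purely because of its explanation cost, which is the "reverse conjunction fallacy" dynamic in miniature.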
