All of Arbitrary's Comments + Replies

Ah, well paleontologists aren't exactly our target group.

If you target people likely to understand X-risk, the movement shouldn't attract any more crazy-sounding people than it currently has, should it? Like IT/computer science people, or people with other technical degrees? Sci-fi fans, perhaps? Any kind of technophile?

2MarsColony_in10years
Good points. The first 3 search terms I suggested were more biology related than paleontology, but the bulk were paleontology. Neither is a terribly relevant field, and I get the impression that interdisciplinary research is rare. I guess it's a judgement call as to how large the benefits might be of turning discussion of previous and current extinction events (super-volcanoes, asteroid impacts, ice ages, etc.) toward addressing future events (nuclear winter?). I'm not quite sure what disciplines would be optimal to target. Are there any talks on engineered pandemics that we might target toward epidemiologists? Perhaps making general AI researchers more aware of the risks would be beneficial, and Nick Bostrom does have a lovely TED talk and several talks at technical conferences on the topic. However, I haven't read enough in those areas to know what keywords might be used only by the experts.

It's great to have responses more thought out than one's original idea!

As for the people who would misunderstand existential risk: are you thinking it's better to leave them in the dark as long as possible so as not to disturb the early existential-risk movement, or that they will be likelier to accept existential risk once there is more academic study of it? Or both? The downside, of course, is that without publicity you will have fewer resources and brains on the problem.

I agree it is best not to mention far future stuff. People are already familiar with nuclear war, e... (read more)

2MarsColony_in10years
I'm arguing "both", but mainly that we don't need the people who would misunderstand or misrepresent X-risk. People react against things they disagree with much more strongly than they react in favor of things they agree with. Consider two social movements:

1) A movement with 1000 reasonable-sounding people and 1 crazy-sounding person.
2) A movement with 1000 reasonable-sounding people and 500 crazy-sounding people.

I'm arguing that movement 2 will grow more slowly than movement 1, and will never become anywhere near as large. This is because new members will be very strongly turned off by seeing a movement that looks one-third crazy, even if they are slightly attracted to the non-crazy bits. If I wrote a script that inserted random YouTube-quality comments into LessWrong, you would get the strong impression that the community had slid into the gutter, and many people would probably leave, despite there being precisely as many interesting and thoughtful comments as before.

The crazier a movement looks on the surface, the harder it will be for academics to be taken seriously by their colleagues, and the fewer academics will be willing to risk their reputations by advocating or publishing on that topic.

As for titles, you are probably right that most people will forget them immediately, and any impressions they form would be negligible. The search terms are mostly biological names for various extinction events throughout history, such as the one that killed the dinosaurs. I basically just skimmed through Wikipedia for obscure technical terms related to extinction.

You seem like a very down to earth guy, MarsColony_in10years :)

I'm not sure X-risk needs to be complicated, though. The basic message is just "Future technology may be dangerous and needs to be studied more". That should be enough to support the cause. One doesn't need to go into the complicated things you mentioned, and I don't think Bostrom does.

The part in Bostrom's video where he talks about future people colonizing the galaxy, uploading themselves into computers, and reaching a post-human condition should probably be cut for mainstream viewers, and maybe the expected-utility calculations too. Other than that, I don't see what could turn people off.

4MarsColony_in10years
That's a reasonably safe statement, but I can still see it misconstrued as:

* "Technology is bad." (This sounds vaguely like a particular flavor of liberal flag-waving, so some types of conservatives may react with pro-economic-growth flag-waving.)
* "The end is nigh!" (This sounds like panic-inducing hysteria.)

Even if the initial audience doesn't interpret it that way, that may be how they explain it to their friends. Preppers will bend it to fit and justify their narrative, and so will the all-natural types. That's just human nature.

I just re-watched Nick Bostrom's "End of Humanity" TED talk, and am again impressed with his skill at presenting these things in the abstract without triggering any knee-jerk reactions. However, once the ideas enter public awareness, I expect these sorts of interpretations:

* "20% chance of human extinction!" (Perhaps as a sensationalist headline somewhere.)
* The idea that more people is better is extremely counter-intuitive for many, especially given the planet's current overpopulation. Many people have overgeneralized this heuristic. It took me many months of consideration before I eventually came around to Bostrom's way of thinking, that future lives should be weighted equally to our own. Never being born just doesn't feel as bad as death, until you get into the philosophical details. I've talked to some who would argue that humans are so destructive to nature that the earth would be better off without people, and who so actively advocated against things like space colonization, even as a backup plan. Of course, this is more belief in belief than actual belief, since they would never actually take steps toward human extinction.
* The idea of colonizing the universe is repulsive to some, who tend to argue that we shouldn't even consider spreading to other planets until we can fix all the problems we have here first. They get a mental image of humans exhausting all natural resources in reach, and destroying pristine planets. (Running out

I was thinking of using Bostrom's TED talk; if that is successful, you can consider making an ad. The ad-blocker point is interesting; it could be polled.

I think that would vary too much depending on the video to make a meaningful comparison. Better to compare $0.20 per view to the opportunity costs of word of mouth and other methods of spreading existential-risk awareness, isn't it?

Has anyone tried advertising existential risk?

Bostrom's "End of Humanity" talk, for instance.

It costs about $0.20 per view for a video ad on YouTube, so if 0.2% of viewers give an average of $100, it would break even (0.002 × $100 = $0.20 raised per view). Hopefully people would give more than that.
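To make the break-even arithmetic explicit, here is a minimal sketch. All figures are the thread's hypothetical assumptions (including Elo's far lower donation rate from a later reply), not real campaign data:

```python
# Break-even sketch for a hypothetical YouTube ad campaign.
# All numbers are assumptions from this thread, not real data.
cost_per_view = 0.20    # dollars paid per ad view
donation_rate = 0.002   # 0.2% of viewers donate
avg_donation = 100.0    # dollars given per donating viewer

# Expected dollars raised per ad view
revenue_per_view = donation_rate * avg_donation
print(f"raised per view: ${revenue_per_view:.2f}")  # equals the $0.20 cost

# A far more pessimistic donation rate: 0.00002% of viewers
pessimistic_rate = 0.00002 / 100
print(f"pessimistic: ${pessimistic_rate * avg_donation:.7f} per view")
```

At the optimistic rate the campaign exactly breaks even; at the pessimistic rate it raises about two thousandths of a cent per view, far below the ad cost.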

You can target ads to groups likely to give a lot, by the way, like the highly educated.

2Stingray
YouTube ads are very annoying. If someone's first acquaintance with existential risk were through YouTube ads, they would get a very bad first impression.
0Elo
I agree with Sherincall that the realistic number is smaller than 0.2%. Try redoing the numbers with 0.00002% of people donating, and see if they still work.
5Sherincall
Not sure if it has been tried before, but I don't think your calculations are complete. For example:

* There is a significant up-front investment to actually make the ad. It needs to be done professionally if you are hoping to attract large donations.
* Assuming 1 in 500 viewers will donate $100 seems very optimistic. Maybe if it is targeted properly, but then you will have a really small number of viewers, not enough to justify the investment cost.
* Willingness to donate is likely correlated with the use of an ad blocker (a conclusion extrapolated from a small sample).
* There may be a PR hit when you are associated with YouTube ads.

I'd think the better approach is to get more public figures to endorse the goal. Not necessarily the likes of Musk and Gates, but lower-profile YouTube folk. A few examples off the top of my head: Wil Wheaton, Tim Minchin, ViHart, LinusTechTips, etc.