According to Dale Carnegie, You Can't Win an Argument—and He Has a Point
Related to: Two Kinds of Irrationality and How to Avoid One of Them
When I was a teenager, I picked up my mom's copy of Dale Carnegie's How to Win Friends and Influence People. One of the chapters that made the biggest impression on me was titled "You Can't Win an Argument," in which Carnegie writes:
Nine times out of ten, an argument ends with each of the contestants more firmly convinced than ever that he is absolutely right.
You can’t win an argument. You can’t because if you lose it, you lose it; and if you win it, you lose it. Why? Well, suppose you triumph over the other man and shoot his argument full of holes and prove that he is non compos mentis. Then what? You will feel fine. But what about him? You have made him feel inferior. You have hurt his pride. He will resent your triumph. And -
"A man convinced against his will
"Is of the same opinion still."
In the next chapter, Carnegie quotes Benjamin Franklin saying how he had made it a rule never to contradict anyone. Carnegie approves: he thinks you should never argue with or contradict anyone, because you won't convince them (even if you "hurl at them all the logic of a Plato or an Immanuel Kant"), and you'll just make them mad at you.
It may seem strange to hear this advice cited on a rationalist blog, because the atheo-skeptico-rational-sphere violates this advice on a routine basis. In fact I've never tried to follow Carnegie's advice—and yet, I don't think the rationale behind it is completely stupid. Carnegie gets human psychology right, and I fondly remember reading his book as the moment I first really got clued in about human irrationality.
Wait vs Interrupt Culture
At the recent CFAR Workshop in NY, someone mentioned that they were uncomfortable with pauses in conversation, and that got me thinking about different conversational styles.
Growing up with friends who were disproportionately male and disproportionately nerdy, I learned that it was a normal thing to interrupt people. If someone said something you had to respond to, you’d just start responding. Didn’t matter if it “interrupted” further words – if they thought you needed to hear those words before responding, they’d interrupt right back.
Occasionally some weird person would be offended when I interrupted, but I figured this was some bizarre fancypants rule from before people had places to go and people to see. Or just something for people with especially thin skins or delicate temperaments, looking for offense and aggression in every action.
Then I went to St. John’s College – the talking school (among other things). In Seminar (and sometimes in Tutorials) there was a totally different conversational norm. People were always expected to wait until whoever was talking was done. People would apologize not just for interrupting someone who was already talking, but for accidentally saying something when someone else looked like they were about to speak. This seemed totally crazy. Some people would just blab on unchecked, and others didn’t get a chance to talk at all. Some people would ignore the norm and talk over others, and nobody interrupted them back to shoot them down.
But then a few interesting things happened:
1) The tutors were able to moderate the discussions, gently. They wouldn’t actually scold anyone for interrupting, but they would say something like, “That’s interesting, but I think Jane was still talking,” subtly pointing out a violation of the norm.
2) People started saying less at a time.
#1 is pretty obvious – with no enforcement of the social norm, a no-interruptions norm collapses pretty quickly. But #2 is actually really interesting. If talking at all is an implied claim that what you’re saying is the most important thing that can be said, then polite people keep it short.
With 15-20 people in a seminar, this also meant that people rarely tried to force the conversation in a certain direction. When you’re done talking, the conversation is out of your hands. This can be frustrating at first, but with time, you learn to trust not your fellow conversationalists individually, but the conversation itself, to go where it needs to. If you haven’t said enough, then you trust that someone will ask you a question, and you’ll say more.
When people are interrupting each other – when they’re constantly tugging the conversation back and forth between their preferred directions – then the conversation itself is just a battle of wills. But when people just put in one thing at a time, and trust their fellows to only say things that relate to the thing that came right before – at least, until there’s a very long pause – then you start to see genuine collaboration.
And when a lull in the conversation is treated as an opportunity to think about the last thing said, rather than an opportunity to jump in with the thing you were holding onto from 15 minutes ago because you couldn’t just interrupt and say it – then you also open yourself up to being genuinely surprised, to seeing the conversation go somewhere that no one in the room would have predicted, to introducing ideas that no one brought with them when they sat down at the table.
By the time I graduated, I’d internalized this norm, and the rest of the world seemed rude to me for a few months. Not just because of the interrupting – but more because I’d say one thing, politely pause, and then people would assume I was done and start explaining why I was wrong – without asking any questions! Eventually, I realized that I’d been perfectly comfortable with these sorts of interactions before college. I just needed to code-switch! Some people are more comfortable with a culture of interrupting when you want to, and accepting interruptions. Others are more comfortable with a culture of waiting their turn, and courteously saying only one thing at a time, not trying to cram in a whole bunch of arguments for their thesis.
Now, I’ve praised the virtues of wait culture because I think it’s undervalued, but there’s plenty to say for interrupt culture as well. For one, it’s more robust in “unwalled” circumstances. If there’s no one around to enforce wait culture norms, then a few jerks can dominate the discussion, silencing everyone else. But someone who doesn’t follow “interrupt” norms only silences themselves.
Second, it’s faster and easier to calibrate how much someone else feels the need to talk, when they’re willing to interrupt you. It takes willpower to stop talking when you’re not sure you were perfectly clear, and to trust others to pick up the slack. It’s much easier to keep going until they stop you.
So if you’re only used to one style, see if you can try out the other somewhere. Or at least pay attention and see whether you’re talking to someone who follows the other norm. And don’t assume that you know which norm is the “right” one; try it the “wrong” way and maybe you’ll learn something.
Cross-posted at my personal blog.
Buying Debt as Effective Altruism?
http://www.theguardian.com/world/2013/nov/12/occupy-wall-street-activists-15m-personal-debt
A collection of Occupy activists recently bought over $14,000,000 in personal debt for $400,000.
Normally, debt-buying companies do this with the intention of collecting the money from the debtors--Occupy did not, and I was struck by the lopsidedness of the figures.
A number I see often in the high-impact philanthropy world is $2300 to save a life (with plenty of caveats). At Occupy's rates, that would buy roughly $80,000 in debt--enough to get two or three families out of a hole that would otherwise render them bankrupt.
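To make the arithmetic explicit, here's a quick back-of-the-envelope sketch in Python; the only inputs are the figures quoted above, and it assumes the debt-to-price ratio scales linearly:

```python
# Back-of-the-envelope: how much debt could a "cost to save a life" sum buy
# at the rate Occupy got? All input figures are quoted in the post above.
debt_bought = 14_000_000   # face value of the debt Occupy purchased ($)
price_paid = 400_000       # what Occupy actually paid for it ($)
cost_per_life = 2_300      # oft-cited high-impact philanthropy figure ($)

leverage = debt_bought / price_paid       # dollars of debt per dollar spent
debt_forgiven = cost_per_life * leverage  # assumes the rate scales linearly

print(f"Leverage: {leverage:.0f}x")                               # -> 35x
print(f"${cost_per_life:,} buys ~${debt_forgiven:,.0f} of debt")  # -> ~$80,500
```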
By itself, this isn't enough to be better than mosquito nets or deworming. But the thing about personal debt is that, thanks to interest payments and stress, it prevents people with high earning potential (compared to an average African) from making decisions that would be optimal were they debt-free--like finishing college or buying a used car so they can take on a higher-paying job.
My idea, though it's a tentative, spur-of-the-moment thing:
Why not found a charity that acts like a combination of Vittana and Giving What We Can, freeing people with good prospects from debt in exchange for their signing a contract to donate a small portion of their future salary to charity?
A few issues that come to mind:
1) Occupy bought a lot of medical debt, which this charity wouldn't, and other types of debt might be harder to buy.
2) People who have decent earning potential have more valuable debt, since they're more likely to pay it off later. (On the other hand, freeing them of interest payments might help them get into a better position for repayment.)
3) The idea is a lot like micro-lending, and organizations that offer that service don't have a great track record (though some have been successful).
4) People just freed from debt might not be in a position to donate much salary/might be unreliable. (Deferring payments until college is finished or the new job is secured could be helpful here.)
5) There might be (well, almost certainly are) difficult legal issues with finding information on people in debt before you actually own their debt.
Are there any other obstacles you all can think of? Other features of the charity that might make it more effective? How does it sound as an intervention that increases the world's productivity in the long run, stacked up against other such interventions?
How to have high-value conversations
Since I moved into the Boston rationalist house, I've found myself having an overwhelming amount of conversation compared to my previous baseline. The conversations at Citadel tend to be fairly intellectual and interesting, but there is a lot of topic drift and a tendency toward entertainment over depth, which seems to be a fairly common pitfall. How can we optimize conversations and direct them towards areas of usefulness and insight?
There have been some previous discussions on this topic on LW, e.g. on useful ways to avoid low-value conversations or steer out of them. I would like to focus on the complementary skill of stimulating high-value directions in a conversation.
First of all, what makes a conversation high-value? There are several possible metrics:
- people learning from each other’s expertise and experience
- people getting to know each other better
- exchange of advice and feedback
- generating ideas and insights
All of these involve increasing the total amount of information available to the participants, either through revealing information that is already there, or through creating new information. This is more likely to happen in a topic area where someone has strong opinions or expertise, or, on the other hand, an area that someone finds challenging where they stand to learn a lot.
One effective way to steer a conversation is through asking purposeful questions. The questions should have sufficient depth to lead to interesting answers, but not be vague or put the other person on the spot. In that sense, a question like “What have you been thinking about lately?” is better than “What do you care about?” or “What are your terminal goals?”. It is better if the question leaves a line of retreat and doesn’t make the person feel low status if they don’t have an answer.
The types of questions that are productive and comfortable are generally different for group and one-on-one conversations. Two-person conversations are more conducive to openness, so one would be able to ask personal questions like
- what memes have affected you strongly in the past or shaped your beliefs?
- what has been important to you lately?
- what has been difficult for you lately?
- what eccentric things have you done?
Some questions are likely to lead to interesting topics in an N-person conversation for any N:
- what have you learned recently?
- what surprised you about experience X?
- what have you been reading?
- who are your role models?
- I have been confused about X, does anyone have advice?
It is generally harder to steer a group conversation in productive directions than a two-person conversation, but the payoff is higher as well, since more people’s time is at stake. Since a single person has less influence in a group conversation, it’s important to use it well. Sometimes the most useful thing to do in a group conversation is to split it into smaller conversations. Asking someone about a subject that only they are likely to be interested in might be considered impolite to the others, but often leads to better separate conversations for everyone involved.
Questions do have limitations as a conversation tactic, and can sometimes result in awkward silence or a string of brief uninformative replies. If this happens, it’s handy to be prepared to answer your own question, which might inspire others to answer it as well. It is generally a good idea to have something that you’d like to talk about, perhaps something you've been working on or a concept that puzzles you, that you can bring up independently of whether and how people respond to your questions. Thinking in advance of topics to discuss with specific people is especially useful, e.g. relating to their past experiences or skill areas.
Do people have advice or good examples of directing conversations? Recalling the best conversations you've ever had, what made them happen?
AI Policy?
Here's a question: are there any policies that could be worth lobbying for to improve humanity's chances re: AI risk?
In the near term, it's possible that not much can be done. Human-level AI still seems a long ways off (and it probably is), making it both hard to craft effective policy on, and hard to convince people it's worth doing something about. The US government currently funds work on what it calls "AI" and "nanotechnology," but that mostly means stuff that might be realizable in the near-term, not human-level AI or molecular assemblers. Still, if anyone has ideas on what can be done in the near term, they'd be worth discussing.
Furthermore, I suspect that as human-level AI gets closer, there will be a lot the US government will be able to do to affect the outcome. For example, there's been talk of secret AI projects, but I suspect such projects would be hard to keep secret from a determined US government that got worried about them, especially if you believe (as I do) that larger organizations will have a much better shot at building AI than smaller ones.
The lesson of Snowden's NSA revelations seems to be that, while in theory there are procedures humans can use to keep secrets, in practice humans are so bad at implementing those procedures that secrecy will fail against a determined attacker. Ironically, this applies both to the government and everyone the government has spied on. However, the ability of people outside the US gov to find out about hypothetical secret government AI projects seems less predictable, dependent on decisions of individual would-be leakers.
And it seems like, as long as the US government is aware of an AI project, there will be a lot it will be able to do to shut the project down if desired. For foreign projects, there will be the possibility of a Stuxnet-style attack, though the government might be reluctant to do that against a nuclear power like China or Russia (or would it?). However, I expect the US to lead the world in innovation for a long time to come, so I don't expect foreign AI projects to be much of an issue in the early stages of the game.
The real issue is US gov vs. private US groups working on AI. And there, given the current status quo for how these things work in the US, my guess is that if the government ever became convinced that an AI project was dangerous, they would find some way to shut it down citing "national security" and basically that would work. However, I can see big companies with an interest in AI lobbying the government to make that not happen. I can also see them deciding to pack their AI operations off to Europe or South Korea or something.
And on top of all this is simply the fact that, if it becomes convinced that AI is important, the US government has a lot of money to throw at AI research.
These are just some very hastily sketched thoughts, don't take them too seriously, and there's probably a lot more that can be said. I do strongly suspect, however, that people who are concerned about risks from AI ignore the government at our peril.
[Prize] Essay Contest: Cryonics and Effective Altruism
I'm starting a contest for the best essay describing why a rational person of a not particularly selfish nature might consider cryonics an exceptionally worthwhile place to allocate resources. There are three distinct questions relating to this, and you can pick any one of them to focus on, or answer all three.
Contest Summary:
- Essay Topic: Cryonics and Effective Altruism
- Answers at least one of the following questions:
- Why might a utilitarian seeking to do the most good consider contributing time and/or money towards cryonics (as opposed to other causes)?
- What is the most optimal way (or at least, some highly optimal, perhaps counterintuitive way) to contribute to cryonics?
- What reasons might a utilitarian have for actually signing up for cryonics services, as opposed to just making a charitable donation towards cryonics (or vice versa)?
- Length: 800-1200 words
- Target audience: Utilitarians, Consequentialists, Effective Altruists, etc.
- Prize: 1 BTC (around $350, at the moment)
- Deadline: Sunday 11/17/2013, at 8:00PM PST
To enter, post your essay as a comment in this thread. Feel free to edit your submission up until the deadline. If it is a repost of something old, a link to the original would be appreciated. I will judge the essays partly based on upvotes/downvotes, but also based on how well each essay meets the criteria and makes its points. Essays that do not directly answer any of the three questions will not be considered for the prize. If there are multiple entries that are too close to call, I will flip a coin to determine the winner.
Terminology clarification: I realise that for some individuals there is confusion about the term 'utilitarian' because historically it has been represented using very simple, humanly unrealistic utility functions such as pure hedonism. For the purposes of this contest, I mean to include anyone whose utility function is well defined and self-consistent -- it is not meant to imply a particular utility function. You may wish to clarify in your essay the kind of utilitarian you are describing.
Regarding the prize: If you win the contest and prefer to receive cash equivalent via paypal, this will be an option, although I consider bitcoin to be more convenient (and there is no guarantee how many dollars it will come out to due to the volatility of bitcoin).
Contest results
Why didn't people (apparently?) understand the metaethics sequence?
There seems to be a widespread impression that the metaethics sequence was not very successful as an explanation of Eliezer Yudkowsky's views. It even says so on the wiki. And frankly, I'm puzzled by this... hence the "apparently" in this post's title. When I read the metaethics sequence, it seemed to make perfect sense to me. I can think of a couple things that may have made me different from the average OB/LW reader in this regard:
- I read Three Worlds Collide before doing my systematic read-through of the sequences.
- I have a background in academic philosophy, so I had a similar thought to Richard Chappell's linking of Eliezer's metaethics to rigid designators independently of Richard.
Bayesianism for Humans
Recently, I completed my first systematic read-through of the sequences. One of the biggest effects this had on me was considerably warming my attitude towards Bayesianism. Not long ago, if you'd asked me my opinion of Bayesianism, I'd probably have said something like, "Bayes' theorem is all well and good when you know what numbers to plug in, but all too often you don't."
Now I realize that that objection is based on a misunderstanding of Bayesianism, or at least Bayesianism-as-advocated-by-Eliezer-Yudkowsky. "When (Not) To Use Probabilities" is all about this issue, but a cleaner expression of Eliezer's true view may be this quote from "Beautiful Probability":
No, you can't always do the exact Bayesian calculation for a problem. Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
The practical upshot of seeing Bayesianism as an ideal to be approximated, I think, is this: you should avoid engaging in any reasoning that's demonstrably nonsensical in Bayesian terms. Furthermore, Bayesian reasoning can be fruitfully mined for heuristics that are useful in the real world. That's an idea that actually has real-world applications for human beings, hence the title of this post, "Bayesianism for Humans."
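As a concrete reminder of what the exact calculation being approximated looks like, here is a minimal sketch of a Bayesian update; the hypotheses and numbers are made up purely for illustration, not taken from the sequences:

```python
# Minimal exact Bayesian update (illustrative numbers).
# H: "this coin is biased toward heads (70% heads)"; ~H: "this coin is fair".

def update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """One application of Bayes' theorem: P(H|E) = P(E|H)P(H) / P(E)."""
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1 - prior)
    return p_evidence_given_h * prior / p_evidence

p = 0.5  # prior P(H) before seeing any flips
for _ in range(3):  # observe three heads in a row, updating after each
    p = update(p, p_evidence_given_h=0.7, p_evidence_given_not_h=0.5)

print(f"P(biased | three heads) = {p:.3f}")  # -> 0.733
```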
Existential Risk II
Meta
-This is not a duplicate of the original less wrong x-risk primer. I like lukeprog's article just fine, but it works mostly as a punch in the gut for anyone who needs a wake up call. Very little of the actual research on x-risk is discussed in that article, so the gap that was there before it was published was largely there after. My article and his would work well being read together.
-This was originally written to accompany a presentation I gave, hence the random inclusion of both hyperlinks and citations. It also lives, with minor differences, here.
-Summary: For various reasons the future is scarier than a lot of people realize. All sorts of things could lead to the destruction of the human species, ranging from asteroid impacts to runaway AIs, and these things are united by the fact that any one of them could destroy the value of the future from a human perspective. The dangers can be separated into bangs (very sudden extinction), crunches (not fatal but crippling), shrieks (mostly curse with a little blessing), and whimpers (a long, slow fading), though there is nothing sacred about these categories. Some humans are trying to prevent this, though their methods are still in their infancy. Much more should be done to support them.
In the beginning
I want to start this off with a quote, which nicely captures both how I used to feel about the idea of human extinction and how I feel about it now:
I think many atheists still trust in God. They say there is no God, but …[a]sk them how they think the future will go, especially with regards to Moral Progress, Human Evolution, Technological Progress, etc. There are a few different answers you will get: Some people just don’t know or don’t care. Some people will tell you stories of glorious progress… The ones who tell stories are the ones who haven’t quite internalized that there is no god. The people who don’t care aren’t paying attention. The correct answer is not nervous excitement, or world-weary cynicism, it is fear. -Nyan Sandwich
Back when I was a Christian I gave some thought to the rapture, which is not entirely unlike extinction as far as most ten-year-olds can tell. Sometime during this period I found a slim little book of fiction which portrayed a damned soul's experience of burning in hell forever, and that did scare me. Such torment, as luck would have it, is easy enough to avoid if you just call god the right name and ask forgiveness often enough.
When I was old enough to contemplate possible secular origins of the apocalypse, I was both an atheist and one of the people who tell glorious stories about the future. The potential fruits of technological development, from the end of aging to the creation of a benevolent super-human AI, excited me, and still excite me now. No doubt I would've admitted the possibility of human extinction; I don't really remember. But there wasn't the kind of internal siren that should go off when you start thinking seriously about one of the Worst Possible Outcomes. That I would remember.
But as I've gotten older I've come to appreciate that most of us are not afraid enough of the future. Those who are afraid are often afraid for the wrong reasons.
What is an Existential Risk?
An existential risk or x-risk (to use a common abbreviation) is "...one that threatens to annihilate Earth-originating intelligent life or permanently and drastically to curtail its potential" (Bostrom 2006). The definition contains some subtlety, as not all x-risks involve the outright death of every human. Some could take potentially eons to complete, and some are even survivable. Positioning x-risks within the broader landscape of risks yields something like this chart:
At the top right extreme is where Cthulhu sleeps: risks that carry the potential to drastically and negatively affect this and every subsequent human generation. So as not to keep everyone in suspense, let's use this chart to put a face on the shadows.
Four Types of Existential Risks
Philosopher Nick Bostrom has outlined four broad categories of x-risk. In more recent papers he hasn't used the terminology that I'm using here, so maybe he thinks the names are obsolete. I find them evocative and useful, however, so I'll stick with them until I have a reason to change.
Bangs are probably the easiest risks to conceptualize. Any event which causes the sudden and complete extinction of humanity would count as a Bang. Think asteroid impacts, supervolcanic eruptions, or intentionally misused nanoweapons.
Crunches are risks which humans survive but which leave us permanently unable to navigate to a more valuable future. An example might be depleting our planetary resources before we manage to build the infrastructure needed to mine asteroids or colonize other planets. After all the die-offs and fighting, some remnant of humanity could probably survive indefinitely, but it wouldn't be a world you'd want to wake up in.
Shrieks occur when a post-human civilization develops but only manages to realize a small amount of its potential. Shrieks are very difficult to effectively categorize, and I'm going to leave examples until the discussion below.
Whimpers are really long-term existential risks. The most straightforward is the heat death of the universe; within our current understanding of physics, no matter how advanced we get we will eventually be unable to escape the ravages of entropy. Another could be if we encounter a hostile alien civilization that decides to conquer us after we've already colonized the galaxy. Such a process could take a long time, and thus would count as a whimper.
Just because whimpers are so much less immediate than other categories of risk and x-risk doesn't automatically mean we can just ignore them; it has been argued that affecting the far future is one of the most important projects facing humanity, and thus we should take the time to do it right.
Sharp readers will no doubt have noticed that there is quite a bit of fuzziness to these classifications. Where, for example, should we put all-out nuclear war, the establishment of an oppressive global dictatorship, or the development of a dangerous and uncontrollable superintelligent AI? If everyone dies in the war it counts as a bang, but if it makes a nightmare of the biosphere while leaving a good fraction of humanity intact it would be a crunch. A global dictatorship wouldn't be an x-risk unless it used some (probably technological) means to achieve near-total control and long-term stability, in which case it would be a crunch. But it isn't hard to imagine such a situation in which some parts of life did get better, like if a violently oppressive government continued to develop advanced medicines so that citizens were universally healthier and longer-lived than people today. If that happened, it would be a Shriek. A similar analysis applies to the AI, with the possible outcomes being Bang, Crunch, and Shriek depending on just how badly we misprogrammed it.
What Ties These Threads Together?
Even if you think existential threats deserve more attention, the rationale for treating them as a diverse but unified phenomenon may not be obvious. In addition to the crucial but (relatively) straightforward work of, say, tracking Near-Earth Objects (NEOs), existential risk researchers also think seriously about alien invasions and rogue AIs. With such a range of speculativeness, why group x-risks together at all?
It turns out that they share a cluster of features which gives them some cohesion and makes them worth studying under a single label, though I won't discuss all of those features here. First and most obvious is that should any of them occur the consequences would be truly vast relative to any other kind of risk. To see why, think about the difference between a catastrophe that kills 99% of humanity and one that kills 100%. As big a tragedy as the former would be, there's a chance humans could recover and build a post-human civilization. But if every person dies, then the entire value of our future is lost (Bostrom 2013).
Second, these are not risks which admit of a trial and error approach. Pretty much by definition a collision with an x-risk will spell doom for humanity, and so we must be more proactive in our strategies for reducing them. Related to this, we as a species have neither the cultural nor biological instincts needed to prepare us for the possibility of extinction. A group of people might live through several droughts and thus develop strong collective norms towards planning ahead and keeping generous food reserves. But they cannot have gone extinct multiple times, and thus they can't rely on their shared experience and cultural memory to guide them in the future. I certainly hope we can develop a set of norms and institutions which makes us all safer, but we can't wait to learn from history. We're going to have to start well in advance, or we won't survive.
A final commonality I'll mention is that the solutions to quite a number of x-risks are themselves x-risks. A powerful enough government could effectively halt research into dangerous pathogens or nano-replicators. But given how States have generally comported themselves in the past, one would do well to be cautious before investing them with that kind of power. Ditto for a superhuman AI, which could set up an infrastructure to protect us from asteroids, nuclear war, or even other less Friendly AI. Get the coding just a little wrong, though, and it might reuse your carbon to make paperclips.
It is indeed a knife edge along which we creep towards the future.
Measuring the Monsters
A first step is getting straight about how likely survival is. The reader may have encountered predictions of the "we have only a 50% chance of surviving the next hundred years" variety. Examining the validity of such estimates is worth doing, but I won't be taking up that challenge here; I tend to agree that these figures involve a lot of subjective judgement, but that even if the chances were very, very small the risk would still be worth taking seriously (Bostrom 2006). At any rate, it seems to me that trying to calculate an overall likelihood of human extinction is going to be premature before we've nailed down probabilities for some of the different possible extinction scenarios. It is to the techniques which x-risk researchers rely on to try and do this that I now turn.
X-risk assessments rely on both direct and indirect methods (Bostrom 2002). Using a direct method involves building a detailed causal model of the phenomenon and using that to generate a risk probability, while indirect methods include arguments, thought experiments, and information that we use to constrain and refine our guesses.
As far as I know, for some x-risks we could use direct methods if we just had a way to gather the relevant information. If we knew where all the NEOs were we could use settled physics to predict whether any of them posed a threat and then prioritize accordingly. But we don't know where they all are, so we might instead examine the frequency of impacts throughout the history of the Earth and then reason about whether or not we think an impact will happen soon. It would be nice to exclusively use direct methods, but we supplement with indirect methods when we can't, and of course for x-risks like AI we are in an even more uncertain position than we are for NEOs.
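To make the indirect move concrete, here's a toy sketch of reasoning from an assumed historical impact frequency to a probability of an impact this century; the rate below is a placeholder chosen for illustration, not a real geological estimate:

```python
import math

# Toy indirect estimate: model major impacts as a Poisson process with an
# assumed long-run rate, then ask for the chance of at least one impact soon.
# The rate is purely illustrative, NOT a real figure from the cratering record.
impacts_per_year = 1 / 500_000   # assumption: one major impact per ~500,000 years
years = 100

expected_impacts = impacts_per_year * years
p_at_least_one = 1 - math.exp(-expected_impacts)  # P(N >= 1) under Poisson

print(f"P(at least one major impact in {years} years) = {p_at_least_one:.1e}")  # ~2e-4
```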
The Fermi Paradox
Applying indirect methods can lead to some strange and counter-intuitive territory, an example of which is the mystery surrounding the Fermi Paradox. The central question is: in a universe with so many potential hotbeds of life, why is it that when we listen for stirring in the void all we hear is silence? Many feel that the universe must be teeming with life, some of it intelligent, so why haven't we seen any sign of it yet?
Musing about possible solutions to the Fermi Paradox can be a lot of fun, and it's worth pointing out that we haven't been looking that long or that hard for signals yet. Nevertheless, I think the argument below has some meat to it.
Observing this state of affairs, some have postulated the existence of at least one Great Filter, a step in the chain of development from the first organisms to space-faring civilizations that must be extremely hard to achieve.
This is cause for concern because the Great Filter could be in front of us or behind us. Let me explain: imagine a continuum with the simplest self-replicating molecules on one side and the Star Trek Enterprise on the other. From our position on the continuum we want to know whether or not we have already passed one of the hardest steps, but we have only our own planet to look at. So imagine that we send out probes to thousands of different worlds in the hopes that we will learn something.
If we find lots of simple eukaryotes that means that the Great Filter is probably not before the development of membrane-bound organelles. The list of possible places on the continuum the Great Filter could be shrinks just a little bit. If instead we find lots of mammals and reptiles (or creatures that are very different but about as advanced), that means the Great Filter is probably not before the rise of complex organisms, so the places the Great Filter might be hiding shrinks again. Worst of all would be if we find the dead ruins of many different advanced civilizations. This would imply that the real killer is yet to come, and we will almost certainly not survive it.
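The shrinking-list logic of this thought experiment can be written as a crude eliminative update. The sketch below assumes a uniform prior over a made-up list of steps and, for simplicity, treats each finding as a hard elimination rather than the softer "probably not" of the text:

```python
# Toy model of the probe thought experiment: uniform prior over where the
# Great Filter sits, updated by eliminating steps the probes show to be easy.
# The step list and the uniform prior are illustrative assumptions.
steps = ["abiogenesis", "eukaryotes", "complex life",
         "intelligence", "surviving technology"]
prior = {step: 1 / len(steps) for step in steps}

def observe_life_up_to(belief, reached_step):
    """Probes find life that reached `reached_step` on many worlds, so rule
    out the Filter sitting at or before that step and renormalize."""
    cutoff = steps.index(reached_step)
    surviving = {s: p for s, p in belief.items() if steps.index(s) > cutoff}
    total = sum(surviving.values())
    return {s: p / total for s, p in surviving.items()}

# Finding widespread complex life pushes the Filter's probable location ahead of us:
posterior = observe_life_up_to(prior, "complex life")
print(posterior)  # -> {'intelligence': 0.5, 'surviving technology': 0.5}
```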
As happy as many people would be to discover evidence of life in the universe, a case has been made that we should hope to find only barren rocks waiting for us in the final frontier. If not even simple bacteria evolve on most worlds, then there is still a chance that the Great Filter is behind us, and we can worry only about the new challenges ahead, which may or may not be Filters as great as the ones in the past.
If all this seems really abstract and out there, that's because it is. But I hope it is clear how this sort of thinking can help us interpret new data, make better guesses, form new hypotheses, etc. When dealing with stakes this high and information this limited, we must do the best we can with what's available.
Mitigation
What priority should we place on reducing existential risk and how can we do that? I don't know of anyone who thinks all our effort should go towards mitigating x-risks; there are lots of pressing issues which are not x-risks that are worth our attention, like abject poverty or geopolitical instability. But I feel comfortable saying we aren't doing nearly as much as we should be. Given the stakes and the fact that there probably won't be a second chance we are going to have to meet x-risks head on and be aggressively proactive in mitigating them.
Suppose we taboo 'aggressively proactive', what's left? Well the first step, as it so often is, will be just to get the right people to be aware of the problem (Bostrom 2002). Thankfully this is starting to be the case as more funding and brain power go into existential risk reduction. We have to get to a point where we are spending at least as much time, energy, and effort making new technology safe as we do making it more powerful. More international cooperation on these matters will be necessary, and there should be some sort of mechanism by which efforts to develop existentially-threatening technologies like super-virulent pathogens can be stopped. I don't like recommending this at all, but almost anything is preferable to extinction.
In the meantime both research that directly reduces x-risk (like NEO detection), as well as research that will help elucidate deep and foundational issues in x-risk (FHI and MIRI) should be encouraged. It's a stereotype that research papers always end with a call for more research, but as was pointed out by lukeprog in a talk he gave, there's more research done on lipstick than on friendly AI. This generalizes to x-risk more broadly, and represents the truly worrying state of our priorities.
Conclusion
Though I maintain we should be more fearful of what's to come, that should not obscure the fact that the human potential is vast and truly exciting. If the right steps are taken, we and our descendants will have a future better than most can even dream of. Life spans measured in eons could be spent learning and loving in ways our terrestrial languages don't even have words for yet. The vision of a post-human civilization flinging its trillions of descendants into the universe to light up the dark is tremendously inspiring. It's worth fighting for.
But we have much work ahead of us.
Requesting clarification- On the Metaethics
My apologies if this doesn't deserve a Discussion post, but if this hasn't been addressed anywhere then it's clearly an important issue.
There have been many defences of consequentialism against deontology, including quite a few on this site. What I haven't seen, however, is any demonstration of how deontology is incompatible with the ideas in Eliezer's Metaethics sequence- as far as I can tell, a deontologist could agree with just about everything in the Sequences.
Said deontologist would argue that, to the extent a human universal morality can exist through generalised moral instincts, said instincts tend to be deontological (as supported by scientific studies: a study of the trolley dilemma vs. the 'fat man' variant showed that people would divert the trolley but not push the fat man). This would be their argument against the consequentialist, whom they could accuse of wanting a consequentialist system and ignoring the moral instincts at the basis of their own speculations.
I'm not completely sure about this, but if I have indeed misunderstood, it seems like an important enough misunderstanding to be worth clearing up.