This is actually a reasonable strategy. A pre-commitment to revenge is useful, but there's no point getting revenge on nature.
I suppose that works for pre-scientific, pre-rational thinking: back when you couldn't do a thing about nature, but you could do a thing about that schmuck looking at you funny.
However, now, as humanity's power grows, we can actually do something about nature: we can learn to predict earthquakes, build structures strong enough to withstand calamity, vaccinate against pestilence, and so on.
So the bias, I suppose, arises from evolution being too slow for human progress.
I think you're missing Eugine's point.
Consider someone who may or may not rape your daughter: the probability that he does so is a function of how likely it is that you'll spend your days and nights hunting him down in order to slowly torture him to death, with no concern for law or personal safety.
Consider an earthquake that may or may not destroy your house. The probability that it does so is independent of what you precommit to doing afterwards.
Sure, in both cases we can prevent the tragedy in other ways, but that's not the main issue.
(Edit: Oops, thanks Eugine)
[People] worry more about future catastrophes as a result of malevolent agents than as a result of unplanned events.
Which makes sense to me, since the universe isn't out to get you, but malevolent agents are.
If the uncaring universe represents a greater level of preventable threat than malevolent agents, does it really matter?
Depends how easy the threat is to prevent. It's much easier to swear vendetta than it is to engineer flood barriers and quakeproof buildings.
The uncaring universe may happen to be a greater threat, but the malevolent agent is trying to be a threat; it's targeting you.
Yes, but if the fact that it's trying to be a threat doesn't make it as great a one, why should it take priority? The fact that they're trying to be a threat is what makes them a threat at all; they probably wouldn't be one otherwise.
Is there any additional utility in stopping malevolent agents from causing the exact same amount of harm as nonmalevolent ones? I don't see why there should be.
As others have pointed out, malevolent agents can be deterred by the signal that a pre-commitment to revenge sends.
Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.
If you're doing a real calculation, it's marginal future harm reduction minus response cost with some time discount function. Obviously, there's no guarantee that you should choose to respond to the malevolent agent threat over the uncaring universe threat. The factors indicated are all of the "all other things being equal" sort.
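To make that concrete, here is a minimal sketch of the comparison being described. The function name, the yearly figures, the horizons, and the 5% discount rate are all hypothetical, chosen only to illustrate "discounted marginal harm reduction minus response cost":

```python
# Minimal sketch of the comparison above. All figures are hypothetical.
# Net value of a response = discounted sum of future harm reduction - response cost.

def net_value(harm_reduction_per_year, response_cost, years, discount_rate):
    """Discounted marginal harm reduction minus the cost of responding."""
    discounted_benefit = sum(
        harm_reduction_per_year / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )
    return discounted_benefit - response_cost

# Deterring this particular agent: benefit lasts a few years, then there will be more agents tomorrow.
print(net_value(harm_reduction_per_year=5, response_cost=20, years=5, discount_rate=0.05))
# Quake-proofing the house: smaller yearly benefit, but it keeps paying off for decades.
print(net_value(harm_reduction_per_year=3, response_cost=25, years=50, discount_rate=0.05))
```

Under these made-up numbers the longer-lived payoff dominates, which is the "pays dividends forever" point below; with other numbers the agent-directed response wins, which is why there's no general guarantee either way.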
I'll give you factors in favor of fighting the uncaring universe - those threats won't be signaled away, and likely have more universal application in time and space. Fighting malevolent agents takes care of this agent today. There will be more tomorrow. Overcoming the inconveniences of gravity pays dividends forever. Hail Science!
The thought occurred to me while watching Sherlock (as kindly recommended by others here). If Sherlock and Moriarty are so "bored" with the challenges presented by their simian neighbors, why don't they fight Death or engage in some other science project to make themselves useful? If they're such smarty boys, why don't they take on the Universe instead of slightly evolved primates?
Malevolent agents have a preference for harming you. Malevolent agents probably have some form of intelligence, so that they can get better at harming you.
In practice, though, outside of an actual war, they usually don't. Even if they're not responded to with swift action, gangs and murderers and so on will generally not evolve into supergangs and mass murderers.
The fact that malevolent entities can take countermeasures against being thwarted, though, will tend to decrease the marginal utility of an investment in trying to stop them. Say that you try to keep weapons out of the hands of criminals, but they change their means of getting weapons and only become slightly less well armed on average. If you were faced by another, nonsentient threat, one which caused as much harm on average but wouldn't take countermeasures against your attempts to resist it, you'd be likely to get much better results by addressing that problem instead.
Of course, sometimes other thinking agents do pose a higher priority threat, and the fact that they respond to signalling and game theory incentives can tip the scales in favor of addressing them over other threats, but that doesn't mean that we evaluate those factors in anything close to a rational manner.
The coincidence of this being rerun one week after a major school shooting is... so remarkable that I'm surprised no one had noted it yet.
I don't see how it's remarkable. Someone has to decide which sequence articles to rerun when; it's not as if the sequence reruns are random and independent of human intervention.
This bias seems a lot like Hanlon's razor. Perhaps it's just a consequence of human intelligence growing: as one becomes smarter, one expects the rest of the world to become smarter as well (lest one consider oneself a statistical outlier), and one begins to fear the misuse of that intelligence by others.
Today's post, The Bad Guy Bias, was originally published on December 9, 2008. A summary:
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was True Sources of Disagreement, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.