"Animals bigger than me" are dangerous once you encounter them up close, but normally there's no reason to get that close unless you're hunting them. The total lifetime risk of "being hurt by a carnivore" is much greater than the total lifetime risk of "being hurt by an animal bigger than me".
This is true both today and in prehistoric environments: most of the predators who tend to tangle with humans aren't much bigger than us - mostly snakes and leopards. OTOH, predators much bigger than humans (tigers, lions) don't routinely hunt humans. (Although tigers may have done so long ago? I don't really know.)
Hippopotamuses are the most dangerous mammals in Africa, and they are much bigger than humans.
Note that their closest competitor, the Cape Buffalo, is also bigger than humans.
I really liked Robin's point that mainstream scientists are usually right, while contrarians are usually wrong. We don't need to get into the details of the dispute - and usually we couldn't make an informed judgment without spending too much time anyway - just figuring out who's "mainstream" tells us who's right with high probability. It's a type of thinking related to reference class forecasting: find a reference class of similar situations with known outcomes, and we get a pretty decent probability distribution over possible outcomes.
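The core move here can be sketched as a simple base-rate lookup: collect past cases that resemble the current one, and read the probability distribution straight off their outcomes. A minimal sketch in Python; the function name and the 95/5 split below are made-up illustrations, not real data:

```python
from collections import Counter

def reference_class_forecast(outcomes):
    """Estimate a probability distribution over outcomes from the
    empirical frequencies in a reference class of past cases."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {outcome: count / total for outcome, count in counts.items()}

# Hypothetical reference class: past mainstream-vs-contrarian disputes,
# each labeled by who turned out to be right.
past_disputes = ["mainstream"] * 95 + ["contrarian"] * 5
forecast = reference_class_forecast(past_disputes)
print(forecast)  # {'mainstream': 0.95, 'contrarian': 0.05}
```

The whole forecast is just the empirical frequency table, which is what makes the choice of reference class carry all the weight.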
Unfortunately, deciding what the proper reference class is isn't straightforward, and can itself be a point of contention. If you put climate change scientists in the reference class of "mainstream science", it lends great credence to their findings. People who doubt them can be freely disbelieved, and any of their arguments can be dismissed by pointing to the low success rate of contrarianism against mainstream science.
But if you put climate change scientists in the reference class of "highly politicized science", then the chance of them being completely wrong becomes orders of magnitude higher. We have plenty of examples where such science was completely wrong and persisted in being wrong in spite of overwhelming evidence, as with race and IQ, nuclear winter, and pretty much everything in macroeconomics. In such cases, the chances of the mainstream being right and of the contrarians being right are not too dissimilar.
Or, if the reference class is "science-y Doomsday predictors", then they're almost certainly completely wrong. See Paul Ehrlich (overpopulation) and Matt Simmons (peak oil) for some examples, both treated extremely seriously by mainstream media at the time. So far, in spite of countless cases of science predicting doom and gloom, not a single prediction has turned out to be true, and usually it failed not just barely (which could be discounted by the anthropic principle) but spectacularly. Cornucopians were virtually always right.
It's also possible to use multiple reference classes - to view the impact on climate according to the "highly politicized science" reference class, and the impact on human well-being according to the "science-y Doomsday predictors" reference class, which is more or less how I think about it.
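A related move, when several reference classes all seem partly apt for the same question, is to blend their base rates into a weighted mixture. This is my own formalization, not something the post proposes, and every number below is made up for illustration:

```python
def mixture_forecast(class_rates, weights):
    """Combine base rates from several reference classes into one
    probability, weighting each class by how apt it seems."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * rate for name, rate in class_rates.items())

# Hypothetical base rates for "the mainstream view is right":
rates = {
    "mainstream science": 0.95,
    "highly politicized science": 0.6,
    "science-y Doomsday predictors": 0.05,
}
# Hypothetical subjective weights on how apt each class is:
weights = {
    "mainstream science": 0.5,
    "highly politicized science": 0.3,
    "science-y Doomsday predictors": 0.2,
}
p = mixture_forecast(rates, weights)
print(round(p, 3))  # 0.665
```

Note that the weights are exactly as subjective as the choice of a single reference class, so this doesn't dissolve the problem, it just makes the subjectivity explicit.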
I'm sure if you thought hard enough, you could come up with other plausible reference classes, each leading to whatever conclusion you desire. I don't see how any one of these reference class arguments is obviously more valid than the others, nor do I see any clear criteria for choosing the right reference class. It seems as subjective as Bayesian priors, except we know in advance that we won't have the evidence necessary for our views to converge.
The problem goes away only if you agree on reference classes in advance, as you reasonably can in the original application, forecasting the costs of public projects. Does this kill reference class forecasting as a general technique, or is there a way to save it?