Following a bias is easy; maintaining balance is difficult. It is easier to describe what should not be done, but the opposite of a bias is usually just another bias. For example, people engaged in motivated stopping accuse their opponents of motivated continuation, and vice versa. And the fact is that everyone has to stop collecting evidence at some point, because we do not have infinite time and attention. The correct moment to stop is when your best cost-benefit estimate says so, regardless of whether you like or dislike the current result. But this is easier said than done; humans are bad at thinking "regardless" of something emotionally important.
For the fallacy of gray, I think the rational conclusion is to accept that nothing is literally 100% certain; on the other hand, if the odds are e.g. 99:1, we should not behave as if they were 50:50. I am not suggesting any specific number here, because I have very little information about the situation, but I can imagine a conclusion like this: -- "I feel 80 / 90 / 99 / 99.9 percent sure it was a genuine terror attack. If I receive convincing evidence in the future, I will be willing to examine it and change my mind, but at this moment I have neither such evidence nor a reason to believe I will get such evidence. So, probably / most likely / almost certainly, it was a genuine terror attack."
Whether you should assign a probability of 80 or 90 or 99 percent depends on how much you know, how much you trust what you know, and how good you are at estimating probabilities. Unless you are an expert in the domain, you probably shouldn't go beyond 99%. And unless you feel really sure, even 99% is too much.
An important reminder is that probability is in the mind, so, for example, if an expert says the probability is 99% and you say it is 80%, that does not mean you contradict the expert; it simply means you are not sure whether you can trust the expert. The expert may have high certainty about the situation, while you have low certainty about it (because you have less knowledge than the expert, and only partial confidence that you can believe the expert).
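One toy way to see how the expert's 99% and your 80% can coexist is to mix the expert's estimate with your own prior, weighted by how much you trust the expert. The linear-mixing model and all the numbers below are illustrative assumptions of mine, not something claimed in the text:

```python
def my_probability(expert_estimate, trust_in_expert, my_prior):
    """Mix an expert's estimate with my own prior, weighted by trust.

    trust_in_expert is my probability that the expert's estimate is
    reliable; with the remaining weight I fall back on my own prior.
    """
    return trust_in_expert * expert_estimate + (1 - trust_in_expert) * my_prior

# Expert says 99%; I trust the expert only 60%, and my own prior is 50%:
p = my_probability(0.99, trust_in_expert=0.6, my_prior=0.5)
print(round(p, 3))  # 0.794 -- close to the 80% figure in the text
```

So a "mere" 80% belief can be perfectly consistent with deferring, imperfectly, to an expert who is 99% sure.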
For practical reasons, the question probably is: "Well, if I believe there is an 80% chance it was a genuine terror attack, what does that mean for me? For which actions is this chance sufficiently high, and for which is it not?" That's relatively easy. For anything you want to do because of this information, imagine an 80% chance you are right and a 20% chance you are wrong, and take a weighted average: would you be happy with the outcome? For example, if a genuine terror attack means you should do something for your protection, then I'd probably say: do it... and accept the 20% chance you might be doing it in vain, against the 80% chance it will protect you. On the other hand, if you plan to do something that has relatively low utility if it was a genuine terror attack, but large negative consequences if it wasn't, then I'd probably say: don't do it, it's not worth the 20% risk.
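The weighted-average reasoning above can be written out as a small expected-utility calculation. The utility numbers here are made up purely for illustration; only the 80/20 split comes from the text:

```python
def expected_utility(p_true, utility_if_true, utility_if_false):
    """Weighted average of the two outcomes under uncertainty."""
    return p_true * utility_if_true + (1 - p_true) * utility_if_false

# Protective action: helps a lot if the attack was genuine (80% chance),
# small wasted effort otherwise.
protect = expected_utility(0.8, utility_if_true=10, utility_if_false=-1)

# Risky action: small benefit if the attack was genuine, large loss if not.
risky = expected_utility(0.8, utility_if_true=2, utility_if_false=-20)

print(protect)  # 7.8  -> positive on average, worth doing
print(risky)    # -2.4 -> negative on average, not worth the 20% risk
```

The sign of the weighted average is what answers "would I be happy with the outcome?": the protective action comes out positive even though it may be in vain, while the risky action comes out negative despite probably being harmless.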
After the terrorist attacks at Charlie Hebdo, conspiracy theories quickly arose about who was behind the attacks.
People who are critical of the West easily swallow such theories, while pro-West people just as easily find them ridiculous.
I guess we can agree that the most rational response would be to enter a state of aporia until sufficient evidence is at hand.
Yet very few people do so. People are guided by their previous understanding of the world when judging new information. That sounds like a fine Bayesian approach for getting through life, but for real scientific knowledge we cannot rely on *prior* reasoning alone (even if that reasoning is Bayesian). Real science works by investigating evidence.
So, how do we characterise the human tendency to jump to conclusions simply supplied by one's sense of normativity? Is there a previously described bias that covers this case?