Well, you can. It's just oxymoronic, or at least ironic, because belief is contrary to the Bayesian paradigm.
You use Bayesian methods to choose an action: you take a set of observations, assign probabilities to the possible outcomes, and pick the action with the best expected value.
Belief in an outcome N means that you set p(N) ≈ 1 whenever p(N) exceeds some threshold. It's a useful computational shortcut, but when you use it, you're not treating N in a Bayesian manner. When you sort things into beliefs and non-beliefs, and then act based on whether you believe N or not, you throw away the information contained in the probability judgement in order to save computation time. It is especially egregious if the threshold you use is roughly constant, rather than a function of (expected value of N) / (expected value of not N).
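To make that concrete, here is a minimal sketch with invented numbers (an umbrella decision, anticipating the example below). A fixed belief threshold discards the probability judgement; an expected-value rule keeps it and weighs it against the stakes.

```python
# Minimal sketch with invented numbers: deciding whether to carry an umbrella.
# Costs are in arbitrary "annoyance" units and are purely illustrative.

p_rain = 0.2          # your probability judgement that it rains today
cost_carry = 1        # cost of lugging an umbrella around all day
cost_soaked = 20      # cost of getting soaked without one

BELIEF_THRESHOLD = 0.5  # a roughly constant threshold, independent of the stakes

# Rule 1: belief as a shortcut -- round p(rain) to 0 or 1, then act on the belief.
believe_rain = p_rain > BELIEF_THRESHOLD
action_by_belief = "carry umbrella" if believe_rain else "leave it home"

# Rule 2: keep the probability and compare expected costs.
expected_cost_carry = cost_carry              # paid whether or not it rains
expected_cost_leave = p_rain * cost_soaked    # 0.2 * 20 = 4
action_by_expected_value = ("carry umbrella"
                            if expected_cost_carry < expected_cost_leave
                            else "leave it home")

print(action_by_belief)          # leave it home ("I don't believe it will rain")
print(action_by_expected_value)  # carry umbrella (1 < 4)
```

The two rules disagree precisely because the fixed threshold ignores how lopsided the costs are.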
If your neighbor took out fire insurance on his house, you wouldn't infer that he believed his house was going to burn down. And if he took his umbrella to work, you wouldn't (I hope) infer that he believed it was going to rain.
Yet when it comes to decisions on a national scale, people cast things in terms of belief. Do you believe North Korea will sell nuclear weapons to Syria? That's the wrong question when you're dealing with a country that has, let's say, a 20% chance of building weapons that would be used to level at least ten major US cities.
Or flash back to the 1990s, before there was a scientific consensus that global warming was real. People would often say, "I don't believe in global warming." And interviewers would try to pin scientists down on whether they did or did not believe in global warming.
It's the wrong question. The right question is which steps are worth taking, given your assigned probabilities and expected-value computations.
A scientist doesn't have to believe in something to consider it worthy of study. Do you believe an asteroid will hit the Earth this century? Do you believe we can cure aging in your lifetime? Do you believe we will have a hard-takeoff singularity? If a low-probability outcome can have a large impact on expected utility, you've already gone wrong the moment you frame it as a question of belief.
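A toy calculation makes the point; every number here is invented for illustration, not taken from any actual risk estimate.

```python
# Toy expected-value comparison; all figures are hypothetical.
p_impact = 0.001            # assumed chance of a major asteroid impact this century
loss_if_impact = 50_000e9   # hypothetical damage in dollars
program_cost = 10e9         # hypothetical cost of detection/deflection work

expected_loss_averted = p_impact * loss_if_impact   # 5e10, i.e. $50 billion
print(expected_loss_averted > program_cost)         # True: worth studying even if you "don't believe" in it
```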
No, I can't. But I can argue that no reduction occurs.
To be fair, I see your point in the case of politicians or others who are disinclined to change their minds: once they say they believe something, there are costs to subsequently saying they don't. That effectively makes it a binary distinction for them.
However, for people not in such situations, hearing that they believe X gives me new information about their internal state (namely, that they assign X something like a 55-85% chance of being the case). That doesn't lose information, and I think it covers most uses of believe/disbelieve.
So I would argue that the believe/disbelieve distinction isn't the problem. The problem is the feedback loop from not letting people change their minds, which forces issues into yes/no terms, combined with the pressure on politicians and public figures to fit their thinking into a soundbite. I don't see how using other terms would ameliorate either of those problems.