It's generally accepted here that theories are valuable to the extent that they provide testable predictions. Being falsifiable means that incorrect theories can be discarded and replaced with theories that better model reality (see Making Beliefs Pay Rent). Unfortunately, reality doesn't play nice: we will sometimes have excellent theoretical reasons for believing a theory, yet that theory will have far too many degrees of freedom to be easily falsifiable.
The prototypical example is the kind of hypothesis produced by evolutionary psychology. Clearly all aspects of humanity have been shaped by evolution, and the idea that our behaviour is an exception would be truly astounding. In fact, I'd say that it is something of an anti-prediction.
But what use is a theory that doesn't make any solid predictions? Firstly, believing in such a theory will normally have a significant impact on your priors, even if no single observation would provide strong evidence of its falsehood. Secondly, if the existing viable theories all claim A and you propose a viable theory that would be compatible with either A or B, then that makes B viable again. And sometimes that can be a worthy contribution in and of itself. Indeed, you can have a funny situation arise where people nominally reject a theory for not sufficiently constraining expectations, while really opposing it because of how people's expectations would adjust if the theory were true.
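As a toy sketch of that second point (with made-up numbers, purely for illustration): if the only viable hypothesis H1 predicts outcome A almost exclusively, B is effectively ruled out. Giving a new, weakly-constraining hypothesis H2 any prior mass puts B back on the table, even though H2 itself barely constrains expectations.

```python
# Toy Bayesian sketch with made-up numbers: adding a weakly-constraining
# hypothesis H2 makes outcome B viable again.

def p_outcome(outcome, hypotheses):
    """Marginal probability of an outcome under a prior over hypotheses.

    `hypotheses` maps name -> (prior, {outcome: likelihood}).
    """
    return sum(prior * likelihoods.get(outcome, 0.0)
               for prior, likelihoods in hypotheses.values())

# Before: every viable theory predicts A almost exclusively.
before = {"H1": (1.0, {"A": 0.99, "B": 0.01})}

# After: H2 is proposed; it is compatible with either A or B.
after = {
    "H1": (0.7, {"A": 0.99, "B": 0.01}),
    "H2": (0.3, {"A": 0.5, "B": 0.5}),
}

print(p_outcome("B", before))  # ~0.01: B is effectively ruled out
print(p_outcome("B", after))   # ~0.16: B is viable again
```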
See also: Building Intuitions on Non-Empirical Arguments in Science
[note: not sure where I saw this concept, and I haven't explored it enough to know if it's useful]
Some things called "theories" aren't predictive, but are explanatory. Such models may be useful for organizing your beliefs, rather than for updating your beliefs.
The idea would be that these kinds of frameworks can improve the salience or accessibility of information used when evaluating or executing more predictive models. Human brains can't actually access all the details of all the evidence they have experienced, so some form of indexing is needed to determine which pieces get retrieved.
Thinking more about it, though, this may be just a restatement of what ALL models do - they're not evidence in themselves, they're filters on evidence to make the quantity manageable and the weightings useful.
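A loose software analogy (my framing, not anything claimed in the note): an explanatory framework acts less like a predictor and more like an index over stored evidence, deciding which observations get surfaced when a predictive model is actually run.

```python
# Loose analogy: a framework as an index/filter over stored evidence,
# surfacing a manageable, relevant subset for later use.
from collections import defaultdict

class EvidenceIndex:
    def __init__(self):
        self._by_tag = defaultdict(list)

    def add(self, observation, tags):
        # The "framework" decides which categories an observation is
        # filed under at the time it is stored.
        for tag in tags:
            self._by_tag[tag].append(observation)

    def recall(self, tag):
        # Retrieval returns only the indexed subset, not all evidence.
        return self._by_tag.get(tag, [])

index = EvidenceIndex()
index.add("bonobo conflict-resolution study", tags=["primates", "social behaviour"])
index.add("GDP figures for 1970-2000", tags=["economics"])

# Whatever model is later evaluated only "sees" what the framework
# made salient under the relevant heading.
print(index.recall("social behaviour"))
```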