A quick Google search brings up "computation theory". This does not seem to be as precise as information theory: we cannot talk about "n units of computation" the same way we can talk about "m bits of information". In fact, there does not seem to be a generally accepted fundamental unit of computation to talk about.
Computational complexity theory is well developed, but it only talks about asymptotic scaling.
Turing machines seem to be able to "hide" a lot of the work in the number of symbols allowed on the tape or the number of states available to the head.
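To illustrate the "hiding" point with a rough sketch (Python is my choice here, not anything from the original discussion): even just writing the same input down takes fewer tape cells, and hence fewer head moves, as the alphabet grows, so "one step" is not a machine-independent unit.

```python
def cells_needed(n: int, alphabet_size: int) -> int:
    """Tape cells (digits) needed to write n using the given alphabet size."""
    cells = 1
    while n >= alphabet_size:
        n //= alphabet_size
        cells += 1
    return cells

n = 10**6
for k in (2, 4, 16, 256):
    # The same number occupies fewer cells on a machine with more symbols.
    print(f"{k:>3}-symbol alphabet: {cells_needed(n, k)} cells to write {n}")
```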
Logic circuits haven't produced much along these lines, and the choice of gates seems a bit arbitrary (why only OR, AND, and NOT?).
It seems like we need a theory of computation to understand things like logical uncertainty, even qualitatively.
Oddly enough, it is simply because computation is a much harder problem. The basic theories for representing information are also the basic theories of computation, but computation then has to manipulate and transform those representations. Different representations make certain computations much simpler and others much more complex. Software is equivalent to hardware, so literally anything that can be done in software could in principle be done in dedicated hardware, which would find it far easier to compute than a general-purpose computer does.
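To make the "representation determines cost" point concrete, here is a small Python sketch of my own (an illustration, not something from the answer above): the same integer can be stored as a plain value or as a multiset of its prime factors; multiplication is nearly free in the factored form, while addition forces an expensive conversion back.

```python
from collections import Counter

def factorize(n: int) -> Counter:
    """Trial-division factorization: 12 -> Counter({2: 2, 3: 1})."""
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def to_value(factors: Counter) -> int:
    value = 1
    for prime, exponent in factors.items():
        value *= prime ** exponent
    return value

def multiply_factors(a: Counter, b: Counter) -> Counter:
    return a + b  # trivial: just merge the exponent multisets

def add_factors(a: Counter, b: Counter) -> Counter:
    # Addition has no nice rule in this representation: we are forced
    # to convert back to plain values, add, and re-factorize.
    return factorize(to_value(a) + to_value(b))

print(multiply_factors(factorize(12), factorize(45)))  # cheap: 540 as Counter({3: 3, 2: 2, 5: 1})
print(add_factors(factorize(12), factorize(45)))       # expensive: 57 as Counter({3: 1, 19: 1})
```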
AND, OR, and NOT gates are simply a human convenience: we find them easy to reason about, and we don't need too many of them to describe or build complicated things. The computers themselves use NAND or NOR.
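A quick sketch of why NAND alone suffices (Python over plain truth values, so this illustrates the logic rather than real gates): AND, OR, and NOT can each be rebuilt from it, which is why hardware can standardise on a single gate type.

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

# AND, OR, and NOT rebuilt from NAND alone.
def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

# Truth-table check against Python's built-in operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
    assert not_(a) == (not a)
```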
You might want to look at ternary logic, which is a perfectly good logic that can include uncertainty. It is arguably slightly better for computers than binary, but only the Soviet Union built ternary machines, so it ended up a casualty of the far superior Western economy.
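For the "can include uncertainty" claim, here is a minimal Python sketch of Kleene-style three-valued logic (my own illustration, not a model of how the Soviet ternary machines actually worked): a third truth value stands for "unknown" and only propagates when it could actually change the answer.

```python
# Truth values: True, False, and None for "unknown".

def not3(a):
    return None if a is None else (not a)

def and3(a, b):
    if a is False or b is False:
        return False   # a single False settles the conjunction
    if a is None or b is None:
        return None    # otherwise any unknown keeps it unknown
    return True

def or3(a, b):
    if a is True or b is True:
        return True    # a single True settles the disjunction
    if a is None or b is None:
        return None
    return False

print(and3(True, None))  # None: the unknown input could still flip the result
print(or3(True, None))   # True: settled regardless of the unknown input
```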