A quick Google search brings up "computation theory". This does not seem to be as precise as information theory: we cannot talk about "n units of computation" the same way we can talk about "m bits of information". In fact, there does not seem to be a generally accepted fundamental unit of computation at all.
Computational complexity theory is well-developed, but it only talks about how resource use scales, not about absolute amounts.
Turing machines seem to be able to "hide" a lot of the work in the number of available tape symbols or the number of internal states of the head.
Logic circuits haven't produced much in this direction and the model seems a bit arbitrary (why exactly OR, AND, and NOT gates?).
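As a side note on the gate question: the choice of basis is arbitrary in the sense that other bases do the same job. Here is a tiny Python sketch (my own illustration, not taken from the question) checking that NAND alone reproduces NOT, AND, and OR, so any circuit over {AND, OR, NOT} can be rewritten over {NAND} with only a constant-factor overhead per gate.

```python
# NAND is functionally complete: NOT, AND, and OR can all be built from it,
# which is part of why the particular basis {AND, OR, NOT} feels arbitrary.

def nand(a, b):
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            assert not_(a) == 1 - a
            assert and_(a, b) == (a & b)
            assert or_(a, b) == (a | b)
    print("NAND alone reproduces NOT, AND, OR on all inputs")
```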
It seems like we need a theory of computation even to qualitatively understand things like logical uncertainty.
The hope in the original question was to somehow have a way of saying "this thing has n compute in it".
I am a bit unsure whether control structures and such can be faithfully preserved. But it seems that if 00, 01, 10, 11 can be translated to 0, 1, 2, 3, then a binary string like 011011 becomes 123, and the very same process can be applied again to turn the 16 digit pairs 00, 01, ..., 33 into 16 letters A through P. Even if we never get down to a single symbol at any given step, we can keep halving the string length by a predictable squaring of the alphabet, and this never runs out: for any N there exists a pyramid of symbol substitutions, roughly log2(N) levels tall, at the top of which every binary string of length N is covered by a single letter. The trick doesn't rely on actual infinities; it works for every finite length.
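Here is a minimal sketch of that substitution pyramid in Python (the names pair_up and pyramid are just mine for illustration): at each level adjacent symbols are merged into one symbol of the squared alphabet, so the string halves in length, and after roughly log2(N) levels any length-N binary string is a single symbol.

```python
# Substitution pyramid: repeatedly pair up adjacent symbols, squaring the
# alphabet size each time, until the whole string is one symbol.

def pair_up(symbols, alphabet_size):
    """Merge adjacent symbols into single symbols of the squared alphabet."""
    if len(symbols) % 2 == 1:              # pad so the length is even
        symbols = symbols + [0]
    merged = [a * alphabet_size + b
              for a, b in zip(symbols[::2], symbols[1::2])]
    return merged, alphabet_size ** 2

def pyramid(bits):
    """Reduce a binary string (list of 0/1) to one symbol, level by level."""
    symbols, size = list(bits), 2
    levels = [(symbols, size)]
    while len(symbols) > 1:
        symbols, size = pair_up(symbols, size)
        levels.append((symbols, size))
    return levels

if __name__ == "__main__":
    # 011011 -> [1, 2, 3] over {0..3} -> ... -> a single symbol
    for symbols, size in pyramid([0, 1, 1, 0, 1, 1]):
        print(f"alphabet size {size:>3}: {symbols}")
```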