The forgetful functor FiltSet→Set does not have a left adjoint, and egregiously so: the added structure is just enough to rule out free filtered sets, and it is worth noting where this matters. On the other hand, S⊗− does have a right adjoint, which suggests the filtered structure to impose on function sets: the degree of a map f:S→T would measure how far it falls short of being a morphism, as this is what makes maps S⊗U→T correspond one-to-one with maps U→(S→T).
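One way to make this precise (our guess at the intended construction; the formula below is an assumption rather than something fixed in the text, and the supremum may need to be taken in N∪{∞}) is to give the set of functions S→T the degree "the worst-case amount by which f raises degree":

```latex
\deg_{[S,T]}(f) \;:=\; \sup_{s \in S}\, \max\bigl(\deg_T(f(s)) - \deg_S(s),\; 0\bigr).
```

With this definition, a function g:U→[S,T] satisfies degree non-increase, i.e. deg[S,T](g(u))≤degU(u) for all u, exactly when degT(g(u)(s))≤degS(s)+degU(u) for all s and u, which is exactly the condition for the transpose S⊗U→T, (s,u)↦g(u)(s), to be a filtered morphism. This is the one-to-one correspondence described above.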
This was a project conducted during MATS 5.0 under the mentorship of Vanessa Kosoy and supported by a grant from BERI. It builds on the String Machines framework (and depends on the linked post for certain definitions), which models category-theoretic generalizations of finite-state transducers. [EDIT: The results here can also be found, presented in self-contained form, in this paper.]
The framework as it previously existed did not have representation-independent ways of bounding (analogues of) time complexity, or natural guarantees that output size would not grow exponentially in input size. We introduce "filtered" transducers, which operate on categories enriched over filtered sets (sets equipped with a degree function valued in a partially ordered monoid, with morphisms being functions that do not increase degree), and then, restricting our attention to transducers with a finite state space, prove constraints on the time complexity growth and expressivity of string machines.
Parameterizing complexity in string machines
Filtered transducers
Going forward, we will generally refer to an object of FiltSet solely by the symbol for its underlying set. One can observe that the identity function on a set S by definition satisfies degS(idS(s))=degS(s) for all s∈S and is thus a morphism in FiltSet. One can also observe that given f:S→T and g:T→V, degV(g(f(s)))≤degT(f(s))≤degS(s) for all s∈S, and therefore g∘f is also a morphism in FiltSet. Hence, FiltSet is indeed a category.
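To make the bookkeeping concrete, here is a minimal computational sketch of filtered sets over N and the degree-non-increasing morphism condition (the class and function names are ours and purely illustrative):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class FilteredSet:
    """A filtered set: a carrier set together with a degree function into N."""
    elements: frozenset
    deg: Callable[[object], int]

def is_filtered_morphism(f: Callable, S: FilteredSet, T: FilteredSet) -> bool:
    """f : S -> T is a morphism in FiltSet iff it never increases degree."""
    return all(T.deg(f(s)) <= S.deg(s) for s in S.elements)

def compose(g: Callable, f: Callable) -> Callable:
    """Ordinary function composition; degrees can only stay equal or shrink further."""
    return lambda s: g(f(s))

# Example: words filtered by length, with morphisms that may only shorten them.
words = FilteredSet(frozenset({"", "a", "ab", "abc"}), deg=len)
identity = lambda w: w        # preserves degree exactly, so it is a morphism
drop_last = lambda w: w[:-1]  # never increases length, so it is a morphism

assert is_filtered_morphism(identity, words, words)
assert is_filtered_morphism(drop_last, words, words)
# Composites of filtered morphisms are again filtered morphisms.
assert is_filtered_morphism(compose(drop_last, drop_last), words, words)
```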
Since addition is associative and commutative, and the cartesian product is associative and commutative up to natural isomorphisms (which remain isomorphisms in FiltSet), −⊗− is associative and commutative up to natural isomorphism. Additionally, the one-element set 1 equipped with deg1(⋅)=0, together with the same unitor maps as in Set (which are, by their definition, filtered morphisms), provides a left and right unit for −⊗−, making FiltSet a symmetric monoidal category.
Remark. Suppose S, T, and U are filtered sets and f:S→T and g:S→U are filtered morphisms. Then the unique factoring function S→T×U defined by s↦(f(s),g(s)) is a filtered morphism S→T⊗U only if degT(f(s))+degU(g(s))≤degS(s) for all s∈S, which does not hold in general. Therefore, −⊗− does not provide a product except when at least one of the sets has degree uniformly zero. However, FiltSet does have finite products S×T, where degS×T(s,t):=max(degS(s),degT(t)). We will not be using this construction.
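Continuing the illustrative snippet above (reusing FilteredSet, is_filtered_morphism, and words from it), one can check directly that the diagonal map fails to be filtered into the additive-degree tensor product but is filtered into the max-degree product:

```python
def tensor(S: FilteredSet, T: FilteredSet) -> FilteredSet:
    """S (x) T: pairs of elements, with degrees added."""
    return FilteredSet(
        frozenset((s, t) for s in S.elements for t in T.elements),
        deg=lambda st: S.deg(st[0]) + T.deg(st[1]),
    )

def product(S: FilteredSet, T: FilteredSet) -> FilteredSet:
    """The categorical product S x T: pairs of elements, with the max of the degrees."""
    return FilteredSet(
        frozenset((s, t) for s in S.elements for t in T.elements),
        deg=lambda st: max(S.deg(st[0]), T.deg(st[1])),
    )

diagonal = lambda w: (w, w)  # the pairing of the identity with itself

assert is_filtered_morphism(diagonal, words, product(words, words))
# deg(w, w) = 2*len(w) > len(w) for any nonempty word, so this pairing is not filtered:
assert not is_filtered_morphism(diagonal, words, tensor(words, words))
```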
Remark. The set-theoretic disjoint union, with its degree function being the canonical factoring map to N of its components' degree functions, provides all finite coproducts in FiltSet.
This expresses the notion of morphisms having degrees which are subadditive under composition in a way that naturally extends to a complexity constraint on transducers. As the monoidal identity of FiltSet is the single-element set with degree zero, the arrows IFiltSet→HomC(A,A) providing the identity morphism idA in the enrichment construction will ensure that identity morphisms are degree zero.
One can generalize the construction of FiltSet by replacing N with any partially-ordered symmetric monoid (where the partial order is translation-invariant, as in the usual definition of a partially-ordered group). We will make use of categories filtered over N2 (with the partial order given by (a,b)≤(c,d) if and only if a≤c and b≤d) in our construction of the filtered state category, as follows:
We can now introduce filtered state categories.
Remark. CS is a filtered-morphism category, with the degrees of morphisms defined as in the definition of the category: the identity morphism of each object A has an output function that, for any variable of degree ax+b, outputs a variable of degree ax+b (the generator corresponding to the variable itself), and is thus of degree zero. Additionally, as also demonstrated in the definition, degree is subadditive under composition, so the composition arrow is indeed a morphism in FiltSet.
Given a filtered functor F:C→D, since the definition of an enriched functor requires the functor's action on morphisms to be provided by a morphism between hom-objects within the category, it automatically holds that degDF(α)≤degC(α) for all morphisms α in C.
Finite-state string machines
For the following section, we assume a filtered deterministic transducer T with input category C, output category D, structure functor F, input state space S(A), output state space S(B), primary input C-signature X→Y, auxiliary input D-signatures (Xi→Yi)1≤i≤n, and output D-signatures (X′j→Y′j)1≤j≤m. We will now demonstrate that requiring a finite output state space on a transducer provides us with certain output size and time complexity guarantees.
Corollary 2.1 can be used to bound the time complexity of string machines, if, fixing some model of computation, we have a bound on the time it takes to run a single transducer as a function of its inputs. We note that the coefficients bounding output size are multiplied together upon composition into either an auxiliary input or a primary input, but only composition into a primary input ties the degree of the state transition that will occur in the second transducer to the primary input degree of the first transducer. We define the sum bounded in the corollary as follows:
While we have established that, in the absence of a meta-vertex, IPDK is linearly bounded by the total degree of its inputs, its scaling as a function of the number of transducers composed has no such guarantee. One can observe that composing together more and more transducers that double their input size causes exponential growth in IPDK. As we are interested in the time complexity both of string machines that incorporate a meta-vertex and of parameterized families of string machines, we must obtain sufficient conditions for IPDK to grow polynomially in the number of transducers in our string machine. To help determine these conditions, we will define the output-degree-bounding parameters referenced in Proposition 2.
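For a toy illustration of this blow-up (an illustrative calculation under the simplifying assumption that IPD is, roughly, the sum over the composed transducers of the degrees of their primary inputs): if each of N composed transducers at most doubles the degree of its primary input, then the k-th transducer receives a primary input of degree up to 2^(k−1)n, so

```latex
\sum_{k=1}^{N} 2^{\,k-1}\, n \;=\; (2^{N}-1)\, n ,
```

which is exponential in N even though each individual stage is only linear in its own input.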
We can now prove some elementary sufficient conditions for IPDKN, for a family {KN}N≥1 of string machines, to grow polynomially in N.
$$\sum_{i=1}^{N} O(1)^{O(\log i)}\left(n + i \cdot O(i^{k})\right) \;\le\; N\left(N^{O(1)}\left(n + O(N^{k+1})\right)\right) \;=\; N^{\ell+1}n + O(N^{\ell+k+2})$$

Remark. We have thus far only considered filtered transducers where the linear term of every variable in the input state space is non-vanishing. If we change this condition to only require that the constant term of the variable be non-vanishing, then the total number of generators in the N2-filtered category over the output category produced in the output is still bounded by a linear function of the primary input signature. So, the degree of the output function is now linear in the product of the degrees of the primary and auxiliary inputs. One can observe, however, that any composition of transducers will only cause outputs to be multiplied with each other a fixed number of times, so IPD will now be bounded by a polynomial in the total inputs of the string machine.
String machine description length
As string machines can themselves be thought of as morphisms in a category, and we are interested in extending our complexity constraints to the case where the meta-vertex is involved, we must imbue the category that string machines are morphisms in with a filtered category structure itself. While this could be accomplished by setting the degree of a string machine to just be the number of transducers involved (where a no-transducer string machine is just the identity and has degree zero), this obviously fails to capture the internal complexity of a transducer.
If we require that the input category of every non-meta-vertex transducer be finitely-generated, then we could make reference to the size of some fixed graph representation of a transducer's transition table and output functions corresponding to generators, and then have this size be additive when transducers are composed. However, as discussed in the previous section, this description size does not constrain time complexity for transducers which build other transducers and then run them. We discuss possible approaches in our conclusion.
The expressivity of finite-state string machines
Now that we have obtained guarantees for the scaling of IPD, which can be thought of as an analogue to the time complexity of running a string machine, in both the number of transducers in a string machine and in input size, we focus on characterizing the expressivity of string machines consisting of finite-state deterministic transducers, that is, filtered deterministic transducers where the state space corresponding to every element of the input category (not only the output state space) is finite.
In the following section, we will assume that all transducers have an input category that is of a form described as follows:
We can treat tapes as the special case of a tape category in which all generators are endomorphisms of the generating object.
Without a meta-vertex
Finite-state string machines, even without a meta-vertex, can prepend to their output, which the traditional definition of a finite-state transducer cannot do. It is natural to investigate whether this makes the framework more expressive than a deterministic finite automaton, e.g., whether it can recognize palindromes by reversing a string and then feeding the product of the string and its reversal (their overlaying) into a transducer that checks whether each character matches up.
Unfortunately, given morphisms f:A→B and g:C→D in an input category (not necessarily a tape category) C, and their product f⊗g, we can see that, since F(f⊗g)=F(f⊗idD)∘F(idA⊗g)=F(idB⊗g)∘F(f⊗idC), a state transition on a collection of finite state spaces induced by the product of two morphisms can only capture the same sort of information that could be captured by independently running two finite-state transducers on both morphisms and comparing their states at the very end.
Thus, "per-character" comparisons of the sort that would require conditions analogous to aligning characters on tape would require infinite state spaces to represent the equivalent memory operations being performed. However, due to the nature of the auxiliary input, as well as the aforementioned prepending capabilities, we are clearly able to model functions that finite state transducers cannot, even without a meta-vertex.
To characterize the class of decision problems computable by non-meta-vertex finite-state string machines, we first prove the following lemma:
With this lemma, it now remains to show, roughly speaking, that for any string machine made out of finite-state deterministic transducers which decides some language (represented by morphisms in a tape category), the computation performed at every input morphism can be broken into a constant-bounded number of transitions of the type described in the lemma. We present a conjecture that finite-state string machines without a meta-vertex can only recognize regular languages, followed by a sketch for a proof of this conjecture.
Conjecture. Suppose a string machine composed only of finite-state transducers, where the input and output categories of each transducer are tape categories. Suppose this string machine has only one free input X→X in some tape category X where all generating morphisms (which form a set Σ) are endomorphisms of the generating object X, and has as its sole output a state space S(A), with some subset of S(A) designated as accepting. Then, taking an endomorphism X→X in X which does not include any instances of the copy morphism to be a string over the alphabet Σ, any language over Σ accepted by this string machine is regular.
Proof sketch. It suffices to prove that the computation involved in deciding this language takes constant space as a function of input size, as DSPACE(O(1))=REG. More precisely, given any input morphism g:X→X and a finite set of possible character strings that we can post-compose into g, we want to show that there is a constant-bounded amount of information (independent of the size of g) that we need to store in order to calculate the output state after any of these character strings is added to g. By Proposition 1, each transducer involved in the string machine can create only a constant-bounded number of duplicates of its auxiliary inputs within each of its outputs. This means that to predict the state of a given transducer after one of the operations described in Lemma 4 is performed on an auxiliary input, we need to store only a constant amount of information. Moreover, due to the limit that the size of a variable places on the number of transitions one can make before running out of variable space and having to either restrict to Lemma 4-like operations or reset, the number of "shapes" an auxiliary input can exist in is absolutely bounded. The hope is that if we back-propagate until reaching the primary input of the first transducer, we will still have only a constant-bounded amount of information to keep track of in order to predict the output state after the next character is read.
Remark. This conjecture is already suggested by the combination of Corollary 2.1 and classical results of Kobayashi and Hennie, which together imply that any one-tape Turing machine running in o(n log n) time decides a regular language. The reason the proposition is not trivially implied by these results is that a naive simulation of the string machine in question on a single-tape Turing machine is guaranteed to cross the last cell of its initial input Ω(n) times (when going back and forth to read the next cell and write the inputs to feed to the next automaton in line). The conjecture would also follow automatically if we proved that the composition of two finite-state transducers can be equivalently modeled by a single finite-state transducer.
With a meta-vertex
The meta-vertex, even at meta-level one, can clearly be seen to increase the expressivity of finite-state string machines. We construct a string machine which can recognize palindromes as follows:
Given a tape category X where all generating morphisms are endomorphisms of the generating object X, mark a special generating endomorphism r. Given a generating endomorphism α≠r of X, we define Tα to be the filtered deterministic transducer with input and output category X, with input and output signatures X→X and three states, a fixed one of which is the starting state. When in the starting state, Tα will transition to the "accepting state" and store idX in the sole variable at that state if it encounters α, and transition to the "rejecting state" and store r if any other generating morphism is encountered. When in the rejecting state, all morphisms are mapped to the identity on the rejecting state. When in the accepting state, all morphisms transition back to the accepting state, but this time append the current primary input generator morphism to the output. The end result is that when given an endomorphism s:X→X, the transducer Tα will output s with its first "character" removed if and only if it starts with α, and output r otherwise. As a morphism, it is of signature (X→X)X→(X→X)X.
Now, we define a filtered deterministic transducer T with input category X which outputs a string machine without meta-vertices. This transducer has input signature X→X and only one state, with one variable of signature (X→X)→(X→X), and it is assumed that no morphism X→X fed to T will include r. Given a generator morphism α of X, the transducer T will prepend Tα to the string machine stored in its variable. If we compose a meta-vertex that takes the output of T, takes the same input morphism as T, and passes it on to the string machine built by T, then we obtain a string machine that outputs idX if it is given a palindrome and r if it is fed anything else.
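The construction can be imitated at the level of plain strings; the following toy simulation (all names are ours, and it ignores the categorical bookkeeping entirely) mirrors it: T_alpha strips a leading α or outputs the reject marker r, T prepends one T_alpha per character read, and the meta-vertex feeds the original input through the machine that T built.

```python
REJECT = "r"  # stands in for the special generator r

def T_alpha(alpha: str):
    """String-level analogue of the transducer T_alpha:
    strip a leading alpha, otherwise output the reject marker."""
    def run(s: str) -> str:
        if s == REJECT:
            return REJECT      # the rejecting state is absorbing
        if s.startswith(alpha):
            return s[1:]       # accepting state: pass the rest of the input through
        return REJECT
    return run

def T(word: str):
    """String-level analogue of T: prepend one T_alpha per character read,
    yielding the pipeline T_{a_n}, ..., T_{a_1} for the input a_1 ... a_n."""
    pipeline = []
    for alpha in word:
        pipeline.insert(0, T_alpha(alpha))  # prepend, as in the construction above
    return pipeline

def meta_vertex(word: str) -> str:
    """Feed the same input word through the string machine built by T."""
    s = word
    for stage in T(word):
        s = stage(s)
    return s

# The empty string plays the role of id_X: palindromes map to it, everything else to r.
assert meta_vertex("abcba") == ""
assert meta_vertex("abca") == REJECT
assert meta_vertex("") == ""
```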
Conclusions
Thus far, we have proven results about the time complexity and expressivity of finite-state deterministic transducers in the string machines framework. This framework lets us model the compositionality of transducers in a way that permits greater compression of description length. For example, we can implement the trivial time-space tradeoff for recognizing a regular language that is the intersection of several regular languages by running several automata in sequence instead of the product automaton, or implement the cascade product from Krohn-Rhodes theory by having a transducer "shuffle" its primary input using its ability to prepend.
Additionally, our guarantees on how time complexity scales with the number of transducers in a string machine translate into sufficient conditions for transducers which build and run other transducers at runtime to run in polynomial time. This will allow us to take advantage of the additional compression opportunities the meta-vertex presents. These results are valuable for developing string machines into a framework that can be used to model hypothesis spaces representable by automata.
Further work
Description complexity that encapsulates time complexity
As exemplified by Definition 4, the construction of filtered sets can be extended to partially-ordered monoids other than N. Since a string machine without a meta-vertex has the degrees of its outputs bounded by a linear function of the degrees of its inputs, we can assign a degree ax+b to finite-state filtered transducers, where a and b are the maximal values of some sum of the coefficients appearing in the output degree in Definition 9, and the partial order is the same as in Definition 4. However, in this case the monoid operation imposed on the polynomials is composition, because of how the linear relations bounding output size compose when transducers are composed, and the filtered-set category is no longer symmetric monoidal, since this operation is not commutative.
Now, the variables in our state categories would have their degrees be of the form ay+bx+c, where a morphism of degree dx+e can compose in a transducer of at most that degree, meaning the output variable is of degree a(y+(dx+e))+bx+c = ay+(dx+e)^a∘bx+c by abuse of notation, where the superscript indicates a-fold self-composition. At further meta-levels, we keep adding terms, though it is unclear how we would disambiguate which variable to compose into for the monoid operation.
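For concreteness, writing (dx+e)^{∘a} for the a-fold self-composite, the composition monoid makes this work out to the following routine calculation (included only as an illustration):

```latex
(dx+e)^{\circ 2} = d(dx+e)+e = d^{2}x + (d+1)e,
\qquad
(dx+e)^{\circ a} = d^{a}x + \bigl(d^{a-1} + \dots + d + 1\bigr)e .
```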
Characterizing the expressivity of transducers with a meta-vertex and monadic transducers
We have not yet established bounds on the expressivity of finite-state deterministic transducers which incorporate the meta-vertex, but, due to the bound on the number of meta-levels, it is likely that the languages they decide form a subset of the context-free languages. The time complexity guarantees also need to be translated to the monadic case. Work on bounding the expressivity of monadic string machines may involve defining string-machine equivalents of the various properties (see work by Mohri, Allauzen et al., Kostolanyi, and Bell et al.) that characterize determinizable weighted transducers, which may in turn allow us to bound the computational complexity of computing the distribution on output morphisms given by a monadic transducer.