Why is the literature on reversible encoders/autoencoders/embedding generators not relevant for your specific use case? If you can give an answer to that, it might be easier to recommend things.
Sorry, I think I might have a superficial understanding of encoders and embeddings. Could you point out how decomposition is performed in that case (or point me toward a favorite reading on the subject)? When I think of feeding a sentence into an encoder, I can think of multiple ways in which some compositional structure might be inferred.
I'm drawing up a proof of concept with seq2seq learners right now, but my hypothesis is that they will be inadequate decomposers, suitable only as a baseline to benchmark against.
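For what it's worth, here's roughly the shape of the baseline I'm sketching (PyTorch; the model, the `<sep>` delimiter framing, and all names are my own illustration, not a reference implementation):

```python
import torch
import torch.nn as nn

class Seq2SeqDecomposer(nn.Module):
    """Toy encoder-decoder that reads a task and emits '<sep>'-separated sub-tasks."""
    def __init__(self, vocab_size: int, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # src: (batch, src_len) task token ids
        # tgt: (batch, tgt_len) gold decomposition written as
        #      "sub1 <sep> sub2 <sep> ...", fed with teacher forcing
        _, h = self.encoder(self.embed(src))       # h: (1, batch, hidden) task summary
        dec, _ = self.decoder(self.embed(tgt), h)  # decode conditioned on that summary
        return self.out(dec)                       # (batch, tgt_len, vocab) logits
```

Trained with cross-entropy on shifted targets, this treats decomposition as plain delimiter-separated generation, which is exactly why I expect it to be a weak decomposer: nothing in the loss knows how the sub-tasks are supposed to recombine.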
Hi, I'm working on a response to "ML projects on IDA", focusing on a specific decomposer, and I don't know whether anyone has formalized what a decomposer is in the general case.
Intuitively, a system is a decomposer if it can take a thing and break it down into sub-things, together with a specific account of how the sub-things recombine.
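For concreteness, here's a minimal sketch of how I'd write that intuition down in code (Python; `Decomposer`, `Decomposition`, `recombine`, and the whitespace example are all my own illustrative names, not an established formalism):

```python
from dataclasses import dataclass
from typing import Callable, Generic, List, TypeVar

T = TypeVar("T")  # the type of "thing" being decomposed

@dataclass
class Decomposition(Generic[T]):
    subparts: List[T]                  # the sub-things
    recombine: Callable[[List[T]], T]  # the account of how they recombine

class Decomposer(Generic[T]):
    def decompose(self, thing: T) -> Decomposition[T]:
        raise NotImplementedError

    def is_faithful(self, thing: T) -> bool:
        # A decomposer commits to a recombination rule we can test:
        # recombining the sub-things should recover the original thing.
        d = self.decompose(thing)
        return d.recombine(d.subparts) == thing

class WhitespaceDecomposer(Decomposer[str]):
    """Trivial instance: split a sentence on single spaces, rejoin with spaces."""
    def decompose(self, thing: str) -> Decomposition[str]:
        return Decomposition(thing.split(" "), lambda parts: " ".join(parts))
```

The `is_faithful` check is the part I care about: on this framing a decomposer isn't just a splitter, it carries a recombination rule you can actually test (e.g., `WhitespaceDecomposer().is_faithful("break this down")` holds).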