The basic theoretical justification for "consistency models" is the same as for what I'm proposing, yes, but:
The SnapFusion paper is similar to that paper, but with generation conditioned on text descriptions, which is why I linked it.
Consistency models are trained from scratch in the paper, in addition to being distilled from diffusion models. I think it'll probably just work with text-conditioned generation, but it's unclear to me, without much thought, how to do the equivalent of classifier-free guidance.
abbreviations

NN = neural network
LS = latent space
background on diffusion
NN autoencoders can be trained to convert between images and a LS where distance corresponds to image similarity. Just as most possible images are just "noise", most of that LS does not correspond to meaningful images. For a simpler explanation, let's consider a simplified diffusion-based image generation model. It has a 2-dimensional LS, and 2 image categories: cats and dogs.
The "unconditional generation" task is to find a random point in the image LS which is inside any meaningful region. The "conditional generation" task is to find a meaningful point in image LS that would also be close to a target position in a description LS.
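To make the simplified model concrete, here is a toy sketch of the two tasks (my own illustration: the region centers and radii in REGIONS are invented values, and the description LS is conflated with the image LS purely to keep the sketch 2-dimensional):

```python
import numpy as np

# Toy model of the simplified 2-D image LS described above.
# Hypothetical (center, radius) pairs for the two meaningful regions.
REGIONS = {"cat": (np.array([-2.0, 0.0]), 1.0),
           "dog": (np.array([2.0, 0.0]), 1.0)}

def is_meaningful(point):
    """Unconditional criterion: the point lies inside some meaningful region."""
    return any(np.linalg.norm(point - center) <= radius
               for center, radius in REGIONS.values())

def conditional_score(point, description_point):
    """Conditional generation wants a meaningful point that is also close
    to a target position (lower score = better; meaningless points score inf)."""
    if not is_meaningful(point):
        return float("inf")
    return float(np.linalg.norm(point - description_point))
```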
Training a diffusion NN involves taking real image LS points and creating a multistep path between them and random image LS points. The diffusion NN is trained to reverse those steps.
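That training setup could be sketched as follows (a simplification I wrote for illustration: real diffusion training uses a Gaussian noise schedule rather than linear interpolation, and make_training_pairs is a hypothetical helper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_training_pairs(real_point, n_steps=10):
    """Create a multistep path from a real image-LS point to a random LS
    point, and return (noisier_point, reverse_step) training pairs.
    The diffusion NN is trained to predict reverse_step from noisier_point."""
    random_point = rng.normal(size=real_point.shape)
    path = [real_point + (random_point - real_point) * (t / n_steps)
            for t in range(n_steps + 1)]          # path[0] is the real point
    return [(path[t + 1], path[t] - path[t + 1])  # step back toward the data
            for t in range(n_steps)]
```

Applying every reverse step in order, starting from the random end of the path, recovers the real point; a trained NN only approximates these steps.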
By training a new diffusion NN to replicate multiple steps of an existing diffusion NN (a type of distillation), it's possible to do the diffusion process in fewer steps. This technique is used in the SnapFusion paper, which gets good results with just 8 steps. The number of diffusion steps can be adjusted, but using fewer steps gives worse results.
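As a sketch of that distillation idea (my own toy illustration, not SnapFusion's actual procedure): the student NN's training target at a point is the result of running the teacher for several steps, so one student step replaces several teacher steps.

```python
import numpy as np

def teacher_step(x, data_center=np.array([0.0, 0.0])):
    """Toy stand-in for one step of a trained diffusion NN:
    move 10% of the way toward the data."""
    return x + 0.1 * (data_center - x)

def distill_target(x, k=4):
    """Target for the student NN at input x: the result of k teacher steps.
    A student trained on these targets covers the same path in 1/k the steps."""
    for _ in range(k):
        x = teacher_step(x)
    return x
```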
the problem
Why are multiple steps needed for good results? Why can't the "diffusion" be done in a single step? I believe the problem is related to LS structure.
Consider a random point P outside the CAT and DOG regions, conditioned on a tag "animals" which may go to either region. The diffusion NN may then be trained to direct the same (or nearly identical) input to multiple different targets.
As a result, the diffusion NN will not provide an accurate direction from points that are far from meaningful target areas. That makes it necessary to use many small steps, both to "average out" diffusion NN output and to progressively get closer to regions where diffusion NN output is more accurate.
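A small closed-form illustration of this conflict (using the hypothetical CAT/DOG region values from before): when an MSE-trained NN must map one input to two different targets, its optimum is the targets' mean, which here lies in neither region.

```python
import numpy as np

cat_center, dog_center = np.array([-2.0, 0.0]), np.array([2.0, 0.0])
region_radius = 1.0  # hypothetical radius for both regions

# The same far-away input appears in training with two different targets.
# For squared-error loss, the optimal single output is the targets' mean.
optimal_output = (cat_center + dog_center) / 2

# The mean lies outside both regions, so a single-step jump from the
# far-away point would land on a meaningless LS point.
outside_cat = np.linalg.norm(optimal_output - cat_center) > region_radius
outside_dog = np.linalg.norm(optimal_output - dog_center) > region_radius
```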
proposed solution
By training a NN to produce output which is more consistent and smooth than what diffusion NNs are trained to produce, we can reduce the above problem. To distinguish such NNs from diffusion NNs, I propose the name "coalescer networks".
Here is a process for training and using coalescer networks:
setup:
training step:
The training target for the direction output can change abruptly in regions where the distances to multiple targets are similar. By separating the direction and distance outputs, we can keep the direction output smooth by shrinking target_direction where direction changes rapidly. The magnitude of target_direction is then an indication of direction accuracy.
Training loss for target_distance could be: (target_distance - magnitude(R - close_pair.image))^2.
Training loss for target_direction could be: sqrt(magnitude(target_direction - normalize(close_pair.image - R))).
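Written out directly (my transcription of the two formulas above; magnitude and normalize are the usual Euclidean norm and unit-vector operations):

```python
import numpy as np

def magnitude(v):
    return np.linalg.norm(v)

def normalize(v):
    return v / np.linalg.norm(v)

def distance_loss(target_distance, R, close_pair_image):
    # (target_distance - magnitude(R - close_pair.image))^2
    return (target_distance - magnitude(R - close_pair_image)) ** 2

def direction_loss(target_direction, R, close_pair_image):
    # sqrt(magnitude(target_direction - normalize(close_pair.image - R)))
    return np.sqrt(magnitude(target_direction - normalize(close_pair_image - R)))
```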
generation process:
Coalescer networks should generally give good results in 2 steps.
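A sketch of what that 2-step generation loop might look like (my assumed procedure, with a hypothetical stand-in toy_coalescer in place of a trained coalescer NN): each step moves along the predicted direction by the predicted distance, so a shrunken low-confidence direction automatically takes a smaller, safer step.

```python
import numpy as np

def generate(coalescer, start_point, n_steps=2):
    """Each step moves along the predicted direction by the predicted
    distance. Because target_direction is shrunk where it is inaccurate,
    uncertain predictions automatically produce smaller steps."""
    point = start_point
    for _ in range(n_steps):
        direction, distance = coalescer(point)
        point = point + direction * distance
    return point

# Hypothetical stand-in for a trained coalescer: points toward a fixed
# "cat" region center, with confidence (direction magnitude) growing as
# the point gets closer to it.
cat_center = np.array([-2.0, 0.0])

def toy_coalescer(point):
    delta = cat_center - point
    dist = np.linalg.norm(delta)
    confidence = 1.0 / (1.0 + dist)  # invented confidence model
    return (delta / dist) * confidence, dist
```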
impact statement
My hope is that this technique will improve the speed of image generation tools, thereby reducing the disparity in image generation capability between individuals and large institutions, and ultimately having a net positive societal impact.