Trajectory Consistency Distillation

Jianbin Zheng *
South China University of Technology
Minghui Hu *
Nanyang Technological University
Zhongyi Fan
Beijing Institute of Technology
Chaoyue Wang
University of Sydney
Changxing Ding
South China University of Technology
Dacheng Tao
Nanyang Technological University
Tat-Jen Cham
Nanyang Technological University

* Equal contribution;   Corresponding author


A Solemn Statement Regarding the Plagiarism Allegations

We regret to hear about the serious accusations from the CTM team.

Before this post, we had already had several rounds of communication with the CTM authors. We elucidate the situation here.

  1. In our first arXiv preprint, we indicated that the proof "mainly borrows the proof from CTM" and never intended to claim credit. As mentioned in our email, we would like to extend a formal apology to the CTM authors for the clearly inadequate level of referencing in our paper. We will provide proper credit in the revised manuscript. Moreover, the current version of the manuscript is only a preprint and will not be featured in any peer-reviewed conference proceedings.
  2. Furthermore, the sections under accusation bear no relation to the core contributions of TCD. The accused parts are supplementary rather than fundamental. Additionally, upon review, we noticed that the accused proof is inconsistent with TCD's original hypothesis. Hence, we have provided a rigorous proof of the theoretical aspects within the framework of our study, which is predicated upon DPM-Solver and DEIS. We also provided this proof in the email.
  3. CTM and TCD differ in motivation, method, and experiments. The experimental results also cannot be obtained from any variant of the CTM algorithm.
    • A simple way to check: use our sampler to sample from the checkpoint CTM released, or vice versa.
    • CTM also provided a training script. We welcome anyone to reproduce the experiments on SDXL based on the CTM algorithm.
We believe the assertion of plagiarism is not only severe but also detrimental to the academic integrity of the involved parties. We earnestly hope that everyone involved gains a more comprehensive understanding of this matter.
All related documents, e.g. the email chain and proof, can be found here.

Abstract

Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy.

Comparison between TCD and other state-of-the-art methods. TCD delivers exceptional results in terms of both quality and speed, clearly surpassing LCM. Notably, LCM suffers a marked decline in quality at high NFEs, whereas TCD maintains superior generative quality, even exceeding the performance of DPM-Solver++(2S) with the original SDXL.

To address this limitation, we first delve into and elucidate the underlying causes. Our investigation identifies the primary issue as errors arising in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses a trajectory consistency function (TCF) and strategic stochastic sampling (SSS).

Training process, wherein the TCF expands the boundary condition to an arbitrary timestep s, thereby reducing the theoretical upper bound of the error.
Sampling process, where SSS significantly reduces accumulated error through iterative traversal with the stochastic parameter, compared to multistep consistency sampling.

The trajectory consistency function diminishes distillation errors by broadening the scope of the self-consistency boundary condition, endowing TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE. Additionally, strategic stochastic sampling is specifically designed to circumvent the accumulated errors inherent in multistep consistency sampling, and is meticulously tailored to complement the TCD model.
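As a rough sketch of the broadened boundary condition (the notation here is illustrative and may differ from the paper's exact formulation): a standard consistency function always maps a noisy sample to the trajectory endpoint, whereas the TCF is additionally conditioned on an arbitrary target timestep s:

```latex
% Consistency function: fixed boundary at the trajectory endpoint only
f_\theta(\mathbf{x}_t, t) \approx \mathbf{x}_0

% Trajectory consistency function: arbitrary target timestep s \le t
f_\theta(\mathbf{x}_t, t \to s) \approx \mathbf{x}_s,
\qquad \text{boundary: } f_\theta(\mathbf{x}_s, s \to s) = \mathbf{x}_s

% Self-consistency along the same PF-ODE trajectory
f_\theta(\mathbf{x}_t, t \to s) = f_\theta(\mathbf{x}_u, u \to s),
\quad \forall\, s \le u \le t
```

Under this sketch, the endpoint-only boundary of the consistency function is the special case s = 0, which is why conditioning on arbitrary s can only loosen, never tighten, the constraint imposed during distillation.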

Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs.


Better than Teacher w/o Additional Supervision

TCD maintains superior generative quality at both low and high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL. It is worth noting that no additional discriminator or LPIPS supervision is included during training.

We demonstrate some examples at 20 NFEs below.

SDXL w/ DPM-Solver
TCD
Comparison with SDXL

Flexible NFEs

The NFEs for TCD sampling can be varied freely (in contrast to the Turbo series) without adversely affecting the quality of the results (in contrast to LCMs).

We compare the performance of TCD and LCM at increasing NFEs. Due to accumulated errors in multistep sampling, LCM loses image detail and its performance degrades, whereas TCD avoids this issue.

LCM
NFE 4
NFE 12
NFE 20
NFE 30
NFE 50
TCD
LCM
TCD
Comparison with LCM at increasing NFEs

Freely Change the Detailing

During inference, the level of detail in the image can be modified simply by adjusting a single hyperparameter, gamma (𝛾). This option does not require introducing any additional parameters.

LCM
TCD (increasing 𝛾)
Comparison with LCM at different 𝛾
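The 𝛾-controlled sampling loop described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: the function `denoise_to(x, t, s)` is a hypothetical stand-in for a trained trajectory consistency function, and the variance-exploding re-noising rule is an assumption made for illustration.

```python
import numpy as np

def sss_sample(denoise_to, x_T, timesteps, gamma, rng):
    """Sketch of strategic stochastic sampling (SSS).

    denoise_to(x, t, s): hypothetical trajectory function mapping a sample
        at time t to an estimate at an earlier time s.
    timesteps: decreasing schedule t_0 > t_1 > ... > t_N = 0.
    gamma in [0, 1]: 0 -> fully deterministic traversal;
        1 -> re-noise fully from the low-noise estimate at every step
        (multistep-consistency-style sampling).
    """
    x = x_T
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        s = (1.0 - gamma) * t_next       # intermediate target time
        x_s = denoise_to(x, t, s)        # jump along the trajectory to s
        if t_next > s:
            # Re-noise from s back up to t_next (VE-style perturbation,
            # chosen for illustration only).
            sigma = np.sqrt(t_next**2 - s**2)
            x = x_s + sigma * rng.standard_normal(x.shape)
        else:
            x = x_s                      # gamma = 0: no noise injected
    return x
```

With 𝛾 = 0 every step is deterministic; with 𝛾 = 1 each step first denoises fully and then re-noises, which is the regime where errors accumulate fastest in multistep consistency sampling.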

Versatility

TCD can be adapted to various SDXL-based extensions and plugins from the community, for instance LoRA, ControlNet, and IP-Adapter, as well as other base models, e.g. Animagine XL.


Citation

@misc{zheng2024trajectory,
  title = {Trajectory Consistency Distillation},
  author = {Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
  archivePrefix = {arXiv},
  eprint = {2402.19159},
  year = {2024},
  primaryClass = {cs.CV}
}
Acknowledgements

The website style was inspired by DreamFusion.