Trajectory Consistency Distillation

Improved Latent Consistency Distillation by Semi-Linear Consistency Function with Trajectory Mapping

Jianbin Zheng *
South China University of Technology
Minghui Hu *
Nanyang Technological University
Zhongyi Fan
Beijing Institute of Technology
Chaoyue Wang
University of Sydney
Changxing Ding
South China University of Technology
Dacheng Tao
Nanyang Technological University
Tat-Jen Cham
Nanyang Technological University

* Equal contribution


A Solemn Statement Regarding the Plagiarism Allegations

We were saddened to learn of the serious accusations from the CTM team.

Before this post, we had already had several rounds of communication with the CTM authors. We clarify the situation below.

  1. In the first arXiv version (the one accused of plagiarism), we had already provided citations and discussion in A. Related Works:

    Kim et al. (2023) proposes a universal framework for CMs and DMs. The core design is similar to ours, with the main differences being that we focus on reducing error in CMs, subtly leverage the semi-linear structure of the PF ODE for parameterization, and avoid the need for adversarial training.

  2. In the first arXiv version, we had also indicated in D.3 Proof of Theorem 4.2:

    In this section, our derivation mainly borrows the proof from (Kim et al., 2023; Chen et al., 2022).

    We have never intended to claim credit for this.

    As we mentioned in our email, we would like to extend a formal apology to the CTM authors for the clearly inadequate level of referencing in our paper. We will give proper credit in the revised manuscript.

  3. In the updated second arXiv version, we have expanded the discussion to elucidate the relationship with CTM. Additionally, we have removed some proofs that were previously included for completeness.

  4. CTM and TCD differ in motivation, method, and experiments. TCD is founded on the principles of the Latent Consistency Model (LCM) and aims to design an effective consistency function by utilizing exponential integrators.

  5. The experimental results also cannot be obtained from any variant of the CTM algorithm.
    • Here is a simple way to check: use our sampler (linked here) to sample from the checkpoint CTM released, or vice versa.
    • CTM also provides a training script. We welcome anyone to reproduce the experiments on SDXL based on the CTM algorithm.

We believe the assertion of plagiarism is not only severe but also detrimental to the academic integrity of the involved parties. We earnestly hope that everyone involved gains a more comprehensive understanding of this matter.

Abstract

The Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observe that LCM struggles to generate images that are both clear and rich in intricate detail.

Comparison between TCD and other state-of-the-art methods. TCD delivers exceptional results in terms of both quality and speed, completely surpassing LCM. Notably, LCM suffers a marked decline in quality at high NFEs. In contrast, TCD maintains superior generative quality at high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL.

To address this limitation, we first delve into and elucidate the underlying causes. Our investigation identifies errors in three distinct areas as the primary sources of the issue. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses the trajectory consistency function (TCF) and strategic stochastic sampling (SSS).

Training process: the TCF expands the boundary conditions to an arbitrary timestep s, thereby reducing the theoretical upper bound of the error.

Sampling process: SSS significantly reduces the accumulated error through iterative traversal with the stochastic parameter, compared with multistep consistency sampling.
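For concreteness, the loop below is a rough sketch of this 𝛾-sampling idea under a VP-type noise schedule (x_t = α(t)·x_0 + σ(t)·ε). The helper names tcf, alpha, and sigma are placeholders of ours, and the exact choice of the intermediate timestep and re-noising in the released implementation may differ.

import torch

def sss_sample(tcf, x_T, timesteps, gamma, alpha, sigma, generator=None):
    # tcf(x, t, s): trained trajectory consistency function that maps a sample
    # at time t to time s <= t.  timesteps is a decreasing list [t_N, ..., 0].
    # gamma in [0, 1]; gamma = 0 recovers deterministic multistep sampling.
    # alpha(t), sigma(t) return the scalar noise-schedule coefficients.
    x = x_T
    for t, t_next in zip(timesteps[:-1], timesteps[1:]):
        # 1) Denoise beyond the next grid point, down to an intermediate time s.
        s = (1.0 - gamma) * t_next
        x_s = tcf(x, t, s)
        if t_next > 0:
            # 2) Re-noise from s back up to t_next via the forward transition
            #    q(x_{t_next} | x_s); the fresh noise limits error accumulation.
            noise = torch.randn(x_s.shape, generator=generator,
                                dtype=x_s.dtype, device=x_s.device)
            ratio = alpha(t_next) / alpha(s)
            var = max(sigma(t_next) ** 2 - (ratio * sigma(s)) ** 2, 0.0)
            x = ratio * x_s + var ** 0.5 * noise
        else:
            x = x_s
    return x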

The trajectory consistency function reduces distillation errors by broadening the scope of the self-consistency boundary condition with a semi-linear consistency function, endowing TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE.
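For intuition, the semi-linear structure referred to here is the exponential-integrator form of the PF ODE solution popularized by DPM-Solver++; the expression below is a generic first-order sketch rather than a restatement of the exact parameterization in the paper. With λ_t = log(α_t/σ_t), the transition from a timestep t to an earlier timestep s can be written as

\mathbf{x}_s = \frac{\sigma_s}{\sigma_t}\,\mathbf{x}_t + \alpha_s\bigl(1 - e^{-(\lambda_s - \lambda_t)}\bigr)\,\hat{\mathbf{x}}_\theta(\mathbf{x}_t, t) + \mathcal{O}\bigl((\lambda_s - \lambda_t)^2\bigr),

so the network only needs to supply the data prediction \hat{\mathbf{x}}_\theta while the linear part of the trajectory is handled analytically; the TCF extends the target of this mapping from the trajectory endpoint to an arbitrary intermediate timestep s.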

Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs.


Better than Teacher w/o Additional Supervision

TCD maintains superior generative quality at both low and high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL. It is worth noting that no additional discriminator or LPIPS supervision is used during training.

We demonstrate some examples at 20 NFEs below.

[Image grids: SDXL w/ DPM-Solver vs. TCD, sampled at 20 NFEs]
Comparison with SDXL

Flexible NFEs

The NFEs for TCD sampling can be varied at will (unlike the Turbo series), without adversely affecting the quality of the results (unlike LCMs).

We compare the performance of TCD and LCM at increasing NFEs.

[Image grids: LCM vs. TCD at NFE 4, 12, 20, 30, and 50]
Comparison with LCM at increasing NFEs

Freely Change the Detailing

During inference, the level of detail in the image can be modified simply by adjusting one hyperparameter, 𝛾. This option does not require the introduction of any additional trainable parameters.
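For reference, the snippet below is a minimal text-to-image sketch using the community diffusers integration, where 𝛾 is exposed as the eta argument of the pipeline call; the model and LoRA identifiers reflect the public releases but should be treated as assumptions.

import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Swap in the TCD scheduler and load the distilled TCD LoRA weights.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

image = pipe(
    prompt="a painting of a lighthouse at dawn, highly detailed",
    num_inference_steps=4,   # the NFEs can be varied freely
    guidance_scale=0.0,      # the distilled model does not rely on CFG
    eta=0.3,                 # corresponds to gamma: per-step stochasticity / detail
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("tcd_sample.png")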

[Image grids: LCM vs. TCD with increasing 𝛾]
Comparison with LCM at different 𝛾

Versatility

TCD can be adapted to various SDXL-based expansions and plugins in the community, for instance LoRA, ControlNet, IP-Adapter, and other base models, e.g., Animagine XL.
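As one illustration (a sketch; the Animagine XL repository id below is an assumption), the same TCD LoRA can be attached to a different SDXL-based community model simply by swapping the base checkpoint:

import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

# Any SDXL-derived base model should work; Animagine XL is used as an example.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()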

[Image grid: TCD combined with community LoRAs, ControlNet, IP-Adapter, and Animagine XL]

Related and Concurrent Work

  • Luo S, Tan Y, Huang L, et al. Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference. arXiv preprint arXiv:2310.04378, 2023.
  • Luo S, Tan Y, Patil S, et al. LCM-LoRA: A Universal Stable-Diffusion Acceleration Module. arXiv preprint arXiv:2311.05556, 2023.
  • Lu C, Zhou Y, Bao F, et al. DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps. Advances in Neural Information Processing Systems, 2022, 35: 5775-5787.
  • Lu C, Zhou Y, Bao F, et al. DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models. arXiv preprint arXiv:2211.01095, 2022.
  • Zhang Q, Chen Y. Fast Sampling of Diffusion Models with Exponential Integrator. ICLR 2023.
  • Kim D, Lai C H, Liao W H, et al. Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion. ICLR 2024.

Citation

@misc{zheng2024trajectory,
  title = {Trajectory Consistency Distillation},
  author = {Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
  archivePrefix = {arXiv},
  eprint = {2402.19159},
  year = {2024},
  primaryClass = {cs.CV}
}

Acknowledgements

The website style was inspired by DreamFusion.