* Equal contribution
We sadly found out our CTM paper (ICLR24) was plagiarized by TCD! It's unbelievable😢—they not only stole our idea of trajectory consistency but also committed "verbatim plagiarism," literally copying our proofs word for word! Please help me spread this. pic.twitter.com/aR6pRjhj5X
— Dongjun Kim (@gimdong58085414) March 25, 2024
We regret to hear about the serious accusations from the CTM team @gimdong58085414. I shall proceed to elucidate the situation and make an archive here. We already have several rounds of communication with CTM's authors. https://t.co/BKn3w1jXuh
— Michael (@Merci0318) March 26, 2024
Latent Consistency Model (LCM) extends the Consistency Model to the latent space and leverages the guided consistency distillation technique to achieve impressive performance in accelerating text-to-image synthesis. However, we observed that LCM struggles to generate images with both clarity and detailed intricacy.
To address this limitation, we first delve into and elucidate the underlying causes. Our investigation identifies that the primary issue stems from errors in three distinct areas. Consequently, we introduce Trajectory Consistency Distillation (TCD), which encompasses the trajectory consistency function (TCF) and strategic stochastic sampling (SSS).
The trajectory consistency function diminishes the distillation errors by broadening the scope of the self-consistency boundary condition with a semi-linear consistency function, endowing TCD with the ability to accurately trace the entire trajectory of the Probability Flow ODE.
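For intuition, the broadened condition can be sketched as follows. This is a paraphrase in standard diffusion notation, not the paper's exact equations: the consistency model maps every point to the trajectory origin, whereas the trajectory consistency function takes the target time s as an extra argument and may land anywhere along the trajectory.

```latex
% Consistency model: every point maps to the trajectory origin.
\[
  f_\theta(x_t, t) = f_\theta(x_{t'}, t'), \quad \forall\, t, t' \in [\epsilon, T],
  \qquad \text{with } f_\theta(x_\epsilon, \epsilon) = x_\epsilon .
\]
% Trajectory consistency function: the target time s is an extra argument,
% so the map can land on any intermediate point of the PF-ODE trajectory.
\[
  f_\theta(x_t, t, s) = f_\theta(x_{t'}, t', s), \quad \forall\, t, t' \in [s, T],
  \qquad \text{with the broadened boundary condition } f_\theta(x_s, s, s) = x_s .
\]
```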
Experiments demonstrate that TCD not only significantly enhances image quality at low NFEs but also yields more detailed results compared to the teacher model at high NFEs.
TCD maintains superior generative quality at both low and high NFEs, even exceeding the performance of DPM-Solver++(2S) with the original SDXL. It is worth noting that no additional discriminator or LPIPS supervision is included during training.
Some examples generated with 20 NFEs are shown below.
The NFEs for TCD sampling can be varied at will (compared with the Turbo series) without adversely affecting the quality of the results (compared with LCMs).
We compare the performance of TCD and LCM at increasing NFEs.
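A minimal sampling sketch with diffusers is given below. It assumes the TCDScheduler shipped in diffusers and the publicly released TCD-SDXL LoRA weights; the model IDs and prompt are assumptions for illustration, and the number of inference steps is simply swept.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

# Model IDs are assumptions based on the public SDXL and TCD-LoRA releases.
base_model_id = "stabilityai/stable-diffusion-xl-base-1.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_id, torch_dtype=torch.float16, variant="fp16"
).to("cuda")
# Swap in the TCD scheduler and attach the distilled LoRA weights.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

prompt = "A photo of a cat wearing a tiny wizard hat, studio lighting."

# The number of function evaluations (steps) can be chosen freely.
for steps in (2, 4, 8, 20):
    image = pipe(
        prompt=prompt,
        num_inference_steps=steps,
        guidance_scale=0.0,  # guidance is distilled into the model
        eta=0.3,             # gamma in the paper, assumed to be exposed as `eta`
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    image.save(f"tcd_{steps}_steps.png")
```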
During inference, the level of detail in the image can be modified simply by adjusting a single hyper-parameter, gamma. This option does not require any additional trainable parameters.
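A short sketch of sweeping this hyper-parameter is shown below; it repeats the setup from the previous sketch and assumes that gamma is exposed through the pipeline's `eta` argument when the diffusers TCDScheduler is used.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

# Same setup as above (model IDs are assumptions).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

# Sweep gamma (assumed to map to `eta`) at a fixed number of steps.
for gamma in (0.0, 0.3, 0.6, 0.9):
    image = pipe(
        prompt="A photo of a cat wearing a tiny wizard hat, studio lighting.",
        num_inference_steps=4,
        guidance_scale=0.0,
        eta=gamma,  # controls the stochastic strength of each sampling step
        generator=torch.Generator("cuda").manual_seed(0),
    ).images[0]
    image.save(f"tcd_gamma_{gamma}.png")
```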
TCD can be adapted to various SDXL-based extensions and plugins in the community, such as LoRA, ControlNet, and IP-Adapter, as well as other base models, e.g., Animagine XL; see the sketch below.
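For example, the same TCD LoRA can be attached to a community SDXL checkpoint. The sketch below swaps the base model for Animagine XL; the checkpoint ID and prompt are assumptions, and any SDXL-based checkpoint should follow the same pattern.

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

# The base-model ID below is an assumption; any SDXL-based checkpoint applies.
base_model_id = "cagliostrolab/animagine-xl-3.0"
tcd_lora_id = "h1t/TCD-SDXL-LoRA"

pipe = StableDiffusionXLPipeline.from_pretrained(
    base_model_id, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights(tcd_lora_id)
pipe.fuse_lora()

image = pipe(
    prompt="1girl, looking at viewer, cherry blossoms, upper body",
    num_inference_steps=8,
    guidance_scale=0.0,
    eta=0.3,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("tcd_animagine.png")
```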
@misc{zheng2024trajectory,
title = {Trajectory Consistency Distillation},
author = {Jianbin Zheng and Minghui Hu and Zhongyi Fan and Chaoyue Wang and Changxing Ding and Dacheng Tao and Tat-Jen Cham},
archivePrefix = {arXiv},
eprint = {2402.19159},
year = {2024},
primaryClass = {cs.CV}
}
The website style was inspired by DreamFusion.