Text-guided video inpainting has significantly improved the performance of content generation applications. A recent line of work relies on diffusion models, which have become essential for high-quality video inpainting yet still face bottlenecks in temporal consistency and computational efficiency. This motivates us to propose a new video inpainting framework using optical Flow-guided Efficient Diffusion (FloED) for higher video coherence. Specifically, FloED employs a dual-branch architecture: a time-agnostic flow branch first restores the corrupted optical flow, and multi-scale flow adapters then provide motion guidance to the main inpainting branch. In addition, a training-free latent interpolation method is proposed to accelerate the multi-step denoising process via flow warping. Combined with a flow attention cache mechanism, FloED efficiently reduces the computational cost of incorporating optical flow. Extensive experiments on background restoration and object removal tasks show that FloED outperforms state-of-the-art diffusion-based methods in both quality and efficiency.
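The training-free acceleration described above warps latents along optical flow to interpolate intermediate frames. A minimal sketch of flow-based latent warping with `torch.nn.functional.grid_sample` (function names and the bidirectional-blend scheme are our assumptions for illustration, not the paper's exact implementation):

```python
import torch
import torch.nn.functional as F

def warp_latent(latent, flow):
    """Warp a latent feature map (B, C, H, W) along a flow field (B, 2, H, W).

    The flow gives per-pixel (dx, dy) displacements in pixel units.
    """
    b, _, h, w = latent.shape
    # Base sampling grid of pixel coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=latent.dtype),
        torch.arange(w, dtype=latent.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
    # Displace by the flow, then normalize to [-1, 1] as grid_sample expects.
    coords = grid + flow
    coords_x = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    norm_grid = torch.stack((coords_x, coords_y), dim=-1)  # (B, H, W, 2)
    return F.grid_sample(latent, norm_grid, mode="bilinear",
                         padding_mode="border", align_corners=True)

def interpolate_latent(lat_a, lat_b, flow_ab, flow_ba, t=0.5):
    """Blend an intermediate latent between two keyframe latents at time t."""
    warped_a = warp_latent(lat_a, t * flow_ab)        # warp a toward b
    warped_b = warp_latent(lat_b, (1 - t) * flow_ba)  # warp b toward a
    return (1 - t) * warped_a + t * warped_b
```

With zero flow, `warp_latent` reduces to the identity, which is a convenient sanity check when wiring this into a denoising loop.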
Video Demo
Please refer to our project page for more details.
- [2025.04.13] 🔥 🔥 🔥 Inference code and weights have been released.
- Release the inference code and weights.
- Release the latent interpolation code.
- Release the training code and evaluation benchmark.
You can install the necessary dependencies using the following command:
```bash
conda env create -f environment.yaml
```
Download the FloED weights.
We provide several examples for the object removal (OR) and background restoration (BR) tasks. Modify the `motion_module` path in the text file.
```bash
python -m scripts.animate --config configs/prompts/v3-inpainting.yaml
```
We have observed that saving results as GIFs may introduce noticeable flickering. For higher-quality output, we recommend converting your results to MP4 format instead.
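For example, a GIF result can be converted to MP4 with ffmpeg (the input and output paths are placeholders):

```shell
# yuv420p gives broad player compatibility; +faststart aids web streaming.
ffmpeg -i results/sample.gif -pix_fmt yuv420p -movflags +faststart results/sample.mp4
```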
To test with your own videos and masks, you can use the Stable Diffusion inpainting script located at `Input/SD2/SD2.py` to generate an anchor frame.
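As a rough sketch of what anchor-frame generation looks like (the checkpoint ID, file paths, and prompt below are illustrative assumptions; the bundled `Input/SD2/SD2.py` may use different settings), the diffusers inpainting pipeline can fill the masked region of a single frame:

```python
# Illustrative only: Input/SD2/SD2.py in this repo may differ.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",  # assumed SD2 checkpoint
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("my_video/frame_000.png").convert("RGB")
mask = Image.open("my_video/mask_000.png").convert("L")  # white = region to inpaint

anchor = pipe(
    prompt="clean background, high quality",
    image=frame,
    mask_image=mask,
).images[0]
anchor.save("my_video/anchor_frame.png")
```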
This project is built on AnimateDiff. Please refer to its original license for usage details.
```bibtex
@article{gu2024advanced,
  title={Advanced Video Inpainting Using Optical Flow-Guided Efficient Diffusion},
  author={Gu, Bohai and Luo, Hao and Guo, Song and Dong, Peiran},
  journal={arXiv preprint arXiv:2412.00857},
  year={2024}
}
```