# Kiwi-Edit
```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Load the reference-only checkpoint; switch device_map to "mps" on Apple devices.
# With device_map set, an explicit pipe.to("cuda") is redundant and is omitted.
pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-reference-only-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "A man with short gray hair plays a red electric guitar."
image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/guitar-man.png"
)

# Generate the edited clip from the reference image and prompt, then save it.
output = pipe(image=image, prompt=prompt).frames[0]
export_to_video(output, "output.mp4")
```
Kiwi-Edit is a versatile video editing framework built on an MLLM encoder and a video Diffusion Transformer (DiT). It supports both instruction-guided and reference-guided editing.

The model fuses learnable queries with latent visual features to provide reference semantic guidance, achieving significant gains in instruction following and reference fidelity.
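The paper describes the actual fusion mechanism; as rough intuition only, the PyTorch sketch below shows one plausible way learnable query tokens could cross-attend to reference latents and be concatenated into a conditioning stream for the DiT. The `ReferenceGuidance` module, its dimensions, and the attention layout are illustrative assumptions, not the released architecture.

```python
import torch
import torch.nn as nn

class ReferenceGuidance(nn.Module):
    """Illustrative sketch (not the released model): fuse learnable queries
    with latent visual features into conditioning tokens for a DiT."""

    def __init__(self, num_queries: int = 32, dim: int = 1024, num_heads: int = 8):
        super().__init__()
        # Learnable query tokens that attend over the reference's visual latents.
        self.queries = nn.Parameter(torch.randn(num_queries, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_latents: torch.Tensor) -> torch.Tensor:
        # visual_latents: (batch, seq_len, dim) features from the MLLM encoder.
        q = self.queries.unsqueeze(0).expand(visual_latents.size(0), -1, -1)
        # Queries cross-attend to the latents to pick up reference semantics.
        fused, _ = self.attn(q, visual_latents, visual_latents)
        # Concatenate query tokens and raw latents as the conditioning stream.
        return torch.cat([fused, visual_latents], dim=1)

# Toy usage: (2, 77, 1024) latents -> (2, 32 + 77, 1024) conditioning tokens.
cond = ReferenceGuidance()(torch.randn(2, 77, 1024))
print(cond.shape)
```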
To use Kiwi-Edit for inference, follow the installation instructions in the official repository. You can run a quick test on a demo video using the following command:
```bash
python diffusers_demo.py \
    --video_path ./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4 \
    --prompt "Remove the monkey." \
    --save_path output.mp4 \
    --model_path linyq/kiwi-edit-5b-instruct-only-diffusers
```
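For a Python-only path, the sketch below mirrors the reference-only snippet above but with the instruct-only checkpoint. It assumes that checkpoint exposes the same diffusers pipeline interface and accepts the source clip via a `video` keyword; both are assumptions to verify against the repository's `diffusers_demo.py`.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_video, export_to_video

# Assumption: the instruct-only checkpoint follows the same pipeline
# interface as the reference-only snippet and takes a `video` argument.
pipe = DiffusionPipeline.from_pretrained(
    "linyq/kiwi-edit-5b-instruct-only-diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

video = load_video("./demo_data/video/source/0005e4ad9f49814db1d3f2296b911abf.mp4")
output = pipe(video=video, prompt="Remove the monkey.").frames[0]
export_to_video(output, "output.mp4")
```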
If you use Kiwi-Edit in your research, please cite the following work:
```bibtex
@misc{kiwiedit,
      title={Kiwi-Edit: Versatile Video Editing via Instruction and Reference Guidance},
      author={Yiqi Lin and Guoqiang Liang and Ziyun Zeng and Zechen Bai and Yanzhe Chen and Mike Zheng Shou},
      year={2026},
      eprint={2603.02175},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.02175},
}
```