LottieGPT: Tokenizing Vector Animation for Autoregressive Generation

2026-04-13
Computer Vision and Pattern Recognition

AI summary

The authors present the first system that generates vector animations: animations built from shapes and paths rather than pixels, which makes them easy to edit and resize. They built a tool called the Lottie Tokenizer that converts complex animations into compact tokens a model can learn from, and to train their system they assembled a dataset of 660,000 real-world vector animations. On this data they trained LottieGPT, a model that produces detailed, editable vector animations from text or image prompts and outperforms previous models on related tasks. Their approach lets models understand and create animations composed of layers and keyframes in a way not done before.

vector animation, Lottie, tokenizer, autoregressive generation, multimodal model, keyframe animation, Qwen-VL, JSON, SVG, dataset
Authors
Junhao Chen, Kejun Gao, Yuehan Cui, Mingze Sun, Mingjin Chen, Shaohui Wang, Xiaoxiao Long, Fei Ma, Qi Tian, Ruqi Huang, Hao Zhao
Abstract
Despite rapid progress in video generation, existing models are incapable of producing vector animation, a dominant and highly expressive form of multimedia on the Internet. Vector animations offer resolution independence, compactness, semantic structure, and editable parametric motion representations, yet current generative models operate exclusively in raster space and thus cannot synthesize them. Meanwhile, recent advances in large multimodal models demonstrate strong capabilities in generating structured data such as slides, 3D meshes, LEGO sequences, and indoor layouts, suggesting that native vector animation generation may be achievable. In this work, we present the first framework for tokenizing and autoregressively generating vector animations. We adopt Lottie, a widely deployed JSON-based animation standard, and design a tailored Lottie Tokenizer that encodes layered geometric primitives, transforms, and keyframe-based motion into a compact and semantically aligned token sequence. To support large-scale training, we also construct LottieAnimation-660K, the largest and most diverse vector animation dataset to date, consisting of 660K real-world Lottie animations and 15M static Lottie images curated from broad Internet sources. Building upon these components, we finetune Qwen-VL to create LottieGPT, a native multimodal model capable of generating coherent, editable vector animations directly from natural language or visual prompts. Experiments show that our tokenizer dramatically reduces sequence length while preserving structural fidelity, enabling effective autoregressive learning of dynamic vector content. LottieGPT exhibits strong generalization across diverse animation styles and outperforms previous state-of-the-art models on SVG generation (a special case of single-frame vector animation).
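To make the tokenization idea concrete: a Lottie file is a JSON tree of layers, shapes, transforms, and keyframes, and the tokenizer's job is to linearize that tree into a token sequence a language model can learn. The paper's Lottie Tokenizer is not described in this excerpt, so the sketch below is only a toy illustration under assumed structure — a tiny Lottie-style clip (one shape layer with a keyframed position, using real Lottie field names like `fr`, `layers`, `ks`, and `p`) flattened into naive path/value tokens.

```python
# A minimal Lottie-style animation: one shape layer with a rectangle
# whose position is keyframed from (0, 0) to (100, 0) over 30 frames.
# This is an illustrative subset of the Lottie JSON schema, not the full spec.
animation = {
    "fr": 30, "ip": 0, "op": 30, "w": 512, "h": 512,
    "layers": [{
        "ty": 4,  # shape layer
        "shapes": [{"ty": "rc", "s": {"a": 0, "k": [80, 80]}}],
        "ks": {   # layer transform with an animated ("a": 1) position
            "p": {"a": 1, "k": [
                {"t": 0,  "s": [0, 0]},
                {"t": 30, "s": [100, 0]},
            ]},
        },
    }],
}

def tokenize(node, prefix=""):
    """Toy tokenizer: flatten the JSON tree into path=value string tokens.

    The paper's Lottie Tokenizer is far more compact and semantically
    aligned; this sketch only shows the basic idea of serializing a
    layered, keyframed structure into a linear token sequence.
    """
    tokens = []
    if isinstance(node, dict):
        for key, val in node.items():
            tokens += tokenize(val, f"{prefix}/{key}")
    elif isinstance(node, list):
        for i, val in enumerate(node):
            tokens += tokenize(val, f"{prefix}[{i}]")
    else:
        tokens.append(f"{prefix}={node}")
    return tokens

tokens = tokenize(animation)
print(tokens[0])   # -> "/fr=30"
print(len(tokens)) # token count even for this tiny clip
```

A learned tokenizer would replace these verbose string paths with a small, shared vocabulary and exploit Lottie's regular schema to drop redundant structure, which is how the reported sequence-length reduction becomes possible.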