Tuna-2: Pixel Embeddings Beat Vision Encoders for Multimodal Understanding and Generation

2026-04-27

Computer Vision and Pattern Recognition
AI summary

The authors created Tuna-2, a model that understands and generates images directly from raw pixels rather than relying on separate pretrained components for each task. This approach simplifies the architecture by removing the modular vision encoders, such as VAEs, that earlier models depended on. Their experiments show Tuna-2 matches or beats encoder-based methods on image-related tasks, especially those requiring detailed visual understanding. The authors conclude that learning from raw pixel data without pretrained vision components can lead to better and more scalable image models.

multimodal models, vision encoder, pixel embeddings, patch embedding, VAE (Variational Autoencoder), latent space, image generation, visual understanding, end-to-end learning, pretraining
Authors
Zhiheng Liu, Weiming Ren, Xiaoke Huang, Shoufa Chen, Tianhong Li, Mengzhao Chen, Yatai Ji, Sen He, Jonas Schult, Belinda Zeng, Tao Xiang, Wenhu Chen, Ping Luo, Luke Zettlemoyer, Yuren Cong
Abstract
Unified multimodal models typically rely on pretrained vision encoders and use separate visual representations for understanding and generation, creating misalignment between the two tasks and preventing fully end-to-end optimization from raw pixels. We introduce Tuna-2, a native unified multimodal model that performs visual understanding and generation directly from pixel embeddings. Tuna-2 drastically simplifies the model architecture by employing simple patch embedding layers to encode visual input, entirely discarding modular vision-encoder designs such as the VAE or the representation encoder. Experiments show that Tuna-2 achieves state-of-the-art performance on multimodal benchmarks, demonstrating that unified pixel-space modelling can fully compete with latent-space approaches for high-quality image generation. Moreover, while the encoder-based variant converges faster in early pretraining, Tuna-2's encoder-free design achieves stronger multimodal understanding at scale, particularly on tasks requiring fine-grained visual perception. These results show that pretrained vision encoders are not necessary for multimodal modelling, and that end-to-end pixel-space learning offers a scalable path toward stronger visual representations for both generation and perception.
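
To illustrate what "simple patch embedding layers" over raw pixels can look like, here is a minimal sketch, assuming a standard ViT-style patchification via a strided linear projection; the module name, patch size, and embedding width are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

class PixelPatchEmbedding(nn.Module):
    """Sketch of a patch-embedding layer mapping raw pixels directly to
    transformer tokens, with no VAE or pretrained vision encoder.
    Hyperparameters (patch_size=16, embed_dim=1024) are assumptions."""

    def __init__(self, patch_size: int = 16, in_channels: int = 3, embed_dim: int = 1024):
        super().__init__()
        # A strided convolution is equivalent to splitting the image into
        # non-overlapping patches and applying a shared linear projection.
        self.proj = nn.Conv2d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, pixels: torch.Tensor) -> torch.Tensor:
        # pixels: (batch, 3, H, W) -> tokens: (batch, num_patches, embed_dim)
        x = self.proj(pixels)                  # (batch, embed_dim, H/ps, W/ps)
        return x.flatten(2).transpose(1, 2)    # (batch, num_patches, embed_dim)

# Usage: a 256x256 image becomes a sequence of (256/16)^2 = 256 pixel tokens.
tokens = PixelPatchEmbedding()(torch.randn(1, 3, 256, 256))
print(tokens.shape)  # torch.Size([1, 256, 1024])
```

The design point the abstract makes is that these tokens are produced end-to-end from pixels, so the same representation can be optimized jointly for understanding and generation, rather than being fixed by a separately pretrained encoder or VAE latent space.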