HDR Video Generation via Latent Alignment with Logarithmic Encoding

2026-04-13

Computer Vision and Pattern Recognition
AI summary

The authors show that generating high dynamic range (HDR) imagery with generative models is simpler than expected when a special logarithmic encoding is used that matches the data distribution these models already understand. Instead of rebuilding models from scratch, they fine-tune pre-trained video models with this encoding and simulate camera degradations during training so the model learns to infer missing detail from its priors. This approach produces high-quality HDR videos across diverse scenes without complicated new methods. Their work suggests that aligning the HDR data representation with existing model knowledge is key to success.

High Dynamic Range (HDR), Generative Models, Logarithmic Encoding, Pretrained Models, Latent Space, Fine-tuning, Video Generation, Camera Degradations, Image Representation, Image Formation
Authors
Naomi Ken Korem, Mohamed Oumoumad, Harel Cain, Matan Ben Yosef, Urska Jelercic, Ofir Bibi, Yaron Inger, Or Patashnik, Daniel Cohen-Or
Abstract
High dynamic range (HDR) imagery offers a rich and faithful representation of scene radiance, but remains challenging for generative models due to its mismatch with the bounded, perceptually compressed data on which these models are trained. A natural solution is to learn new representations for HDR, which introduces additional complexity and data requirements. In this work, we show that HDR generation can be achieved in a much simpler way by leveraging the strong visual priors already captured by pretrained generative models. We observe that a logarithmic encoding widely used in cinematic pipelines maps HDR imagery into a distribution that is naturally aligned with the latent space of these models, enabling direct adaptation via lightweight fine-tuning without retraining an encoder. To recover details that are not directly observable in the input, we further introduce a training strategy based on camera-mimicking degradations that encourages the model to infer missing high dynamic range content from its learned priors. Combining these insights, we demonstrate high-quality HDR video generation using a pretrained video model with minimal adaptation, achieving strong results across diverse scenes and challenging lighting conditions. Our results indicate that HDR, despite representing a fundamentally different image formation regime, can be handled effectively without redesigning generative models, provided that the representation is chosen to align with their learned priors.
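The two core ideas of the abstract, a logarithmic encoding that compresses HDR radiance into a bounded range, and camera-mimicking degradations that clip highlights so the model must infer the lost content, can be illustrated with a minimal sketch. The specific curve and degradation pipeline below are illustrative assumptions, not the paper's exact formulation; function names (`log_encode`, `camera_clip_degradation`) and the `max_value` parameter are hypothetical.

```python
import numpy as np

def log_encode(radiance, max_value=100.0):
    """Map linear HDR radiance into [0, 1] with a logarithmic curve,
    compressing highlights in the spirit of cinematic log encodings.
    (Illustrative curve; the paper's exact encoding may differ.)"""
    radiance = np.clip(radiance, 0.0, max_value)
    return np.log1p(radiance) / np.log1p(max_value)

def camera_clip_degradation(radiance, exposure=1.0):
    """Mimic a camera's limited dynamic range: scale by exposure, then
    clip highlights, discarding content the model must learn to infer."""
    return np.clip(radiance * exposure, 0.0, 1.0)

# Example: encode an HDR frame as a bounded training target, and build
# a degraded (clipped) observation as the conditioning input.
hdr_frame = np.array([0.01, 0.5, 5.0, 80.0])   # linear radiance values
encoded = log_encode(hdr_frame)                 # bounded target in [0, 1]
degraded = camera_clip_degradation(hdr_frame)   # simulated SDR observation
```

Because the encoded values live in a bounded, perceptually compressed range similar to standard image data, a pretrained model can be adapted to them with lightweight fine-tuning rather than a retrained encoder.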