Muninn: Your Trajectory Diffusion Model But Faster

2026-05-11 · Robotics

Robotics · Performance
AI summary

The authors address the problem that diffusion-based robot motion planners are slow because each plan requires many costly denoiser evaluations. They introduce Muninn, a training-free method that decides, at each denoising step, whether to reuse a cached denoiser output or recompute it, guided by an uncertainty budget that tracks how far the final trajectory could deviate. This speeds up planning by up to 4.6x without retraining the model or compromising accuracy or safety, and it works across different types of diffusion planners. The authors also validate Muninn on real robots to confirm its effectiveness in real-time tasks.

diffusion models, trajectory planning, robot motion, denoising, real-time control, uncertainty estimation, cached computations, sampling acceleration, closed-loop navigation, model-free optimization
Authors
Gokul Puthumanaillam, Hao Jiang, Ruben Hernandez, Jose Fuentes, Paulo Padrao, Leonardo Bobadilla, Melkior Ornik
Abstract
Diffusion-based trajectory planners can synthesize rich, multimodal robot motions, but their iterative denoising makes online planning and control prohibitively slow. Existing accelerations either modify the sampler or compress the network, sacrificing plan quality or requiring retraining without accounting for downstream control risk. We address the problem of making diffusion-based trajectory planners fast enough for real-time robot use without retraining the model or sacrificing trajectory quality, and in a way that works across diverse state-space diffusion architectures. Our key insight is that diffusion trajectory planners expose two signals we can exploit: a cheap probe of how their internal trajectory representation changes across steps, and analytic coefficients that describe how denoiser errors affect the sampler's state update. By calibrating the first signal against the second on offline runs, we obtain a per-step score that upper-bounds how far the final trajectory can deviate when we reuse a cached denoiser output, and we treat this bound as an uncertainty budget that we can spend over the denoising process. Building on this insight, we present Muninn, a training-free caching wrapper that tracks this uncertainty budget during sampling and, at each diffusion step, chooses between reusing a cached denoiser output when the predicted deviation is small and recomputing the denoiser when it is not. Across standard benchmarks Muninn delivers up to 4.6x wall-clock speedups across several trajectory diffusion models by reducing denoiser evaluations, while preserving task performance and safety metrics. Muninn further certifies that cached rollouts remain within a specified distance of their full-compute counterparts, and we validate these gains in real-time closed-loop navigation and manipulation hardware deployments. Project page: https://github.com/gokulp01/Muninn.
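To make the abstract's budget-spending idea concrete, here is a minimal sketch of an uncertainty-budgeted caching loop. This is not Muninn's actual implementation; the names `probe`, `weights`, and the simplified sampler update are assumptions made for illustration. The `probe` plays the role of the cheap signal on representation change, `weights[t]` stands in for the analytic per-step coefficients, and the loop reuses the cached denoiser output only while the accumulated predicted deviation stays within the budget.

```python
import numpy as np

def cached_denoise(denoiser, x, timesteps, probe, weights, budget):
    """Hypothetical sketch of uncertainty-budgeted denoiser caching.

    denoiser(x, t) -> predicted noise (the expensive network call)
    probe(x, t)    -> cheap scalar: how much the internal representation
                      has drifted since the last full evaluation (assumed)
    weights[t]     -> analytic coefficient bounding how a reused output
                      can perturb the final trajectory at step t (assumed)
    budget         -> total deviation we are willing to accumulate
    """
    cached = None      # last full denoiser output
    spent = 0.0        # deviation budget consumed so far
    evals = 0          # number of full denoiser evaluations
    for t in timesteps:
        # Predicted deviation of the final trajectory if we reuse the cache.
        deviation = weights[t] * probe(x, t) if cached is not None else np.inf
        if spent + deviation <= budget:
            eps = cached            # reuse: spend part of the budget
            spent += deviation
        else:
            eps = denoiser(x, t)    # recompute: refresh cache, spend nothing
            cached = eps
            evals += 1
        x = x - weights[t] * eps    # placeholder for the sampler's update
    return x, evals
```

With `budget=0` every step recomputes (full-compute baseline); a larger budget skips denoiser calls while the accumulated bound certifies the rollout stays within the specified distance of its full-compute counterpart.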