cuRoboV2: Dynamics-Aware Motion Generation with Depth-Fused Distance Fields for High-DoF Robots

2026-03-05

Robotics
AI summary

The authors developed cuRoboV2, a new robotics system that improves how robots plan and perform movements safely and efficiently. They combined three innovations: smoother and physically realistic motion planning, faster and more detailed 3D perception using GPUs, and faster whole-body calculations for robots with many joints. Their system works much better than existing ones on complex robots like humanoids and is faster while using less memory. They also made the code easier to work with so AI assistants can help write it, making development quicker.

Keywords
B-spline trajectory optimization, signed distance fields (TSDF/ESDF), GPU acceleration, high-DoF systems, inverse dynamics, collision avoidance, robot motion planning, humanoid robots, differentiable kinematics, robot perception
Authors
Balakumar Sundaralingam, Adithyavairavan Murali, Stan Birchfield
Abstract
Effective robot autonomy requires motion generation that is safe, feasible, and reactive. Current methods are fragmented: fast planners output physically unexecutable trajectories, reactive controllers struggle with high-fidelity perception, and existing solvers fail on high-DoF systems. We present cuRoboV2, a unified framework with three key innovations: (1) B-spline trajectory optimization that enforces smoothness and torque limits; (2) a GPU-native TSDF/ESDF perception pipeline that generates dense signed distance fields covering the full workspace (unlike existing methods, which only provide distances within sparsely allocated blocks), running up to 10x faster with 8x less memory than the state of the art at manipulation scale, with up to 99% collision recall; and (3) scalable GPU-native whole-body computation, namely topology-aware kinematics, differentiable inverse dynamics, and map-reduce self-collision checking, that achieves up to 61x speedup while also extending to high-DoF humanoids (where previous GPU implementations fail). On benchmarks, cuRoboV2 achieves 99.7% success under a 3kg payload (where baselines achieve only 72--77%), 99.6% collision-free IK on a 48-DoF humanoid (where prior methods fail entirely), and 89.5% retargeting constraint satisfaction (vs. 61% for PyRoki); these collision-free motions yield locomotion policies with 21% lower tracking error than PyRoki and 12x lower cross-seed variance than mink. A ground-up codebase redesign for discoverability enabled LLM coding assistants to author up to 73% of new modules, including hand-optimized CUDA kernels, demonstrating that well-structured robotics code can unlock productive human--LLM collaboration. Together, these advances provide a unified, dynamics-aware motion generation stack that scales from single-arm manipulators to full humanoids.
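To illustrate why a B-spline parameterization (innovation 1) makes smoothness and dynamic limits tractable: a B-spline's derivatives are themselves B-splines, so velocity and acceleration can be evaluated in closed form from the control points being optimized. Below is a minimal 1-DoF sketch using SciPy; it is not the cuRoboV2 implementation, and the control-point values and acceleration limit are illustrative assumptions.

```python
# Sketch only: B-spline trajectory with analytic derivatives, so smoothness
# costs and box limits on acceleration can be imposed on the control points.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
n_ctrl = 8  # control points are the optimization variables
# Clamped knot vector (endpoint multiplicity degree+1) so the spline
# interpolates its first and last control points.
knots = np.concatenate([np.zeros(degree),
                        np.linspace(0.0, 1.0, n_ctrl - degree + 1),
                        np.ones(degree)])
ctrl = np.linspace(0.0, 1.5, n_ctrl)  # hypothetical joint-angle control points

q = BSpline(knots, ctrl, degree)  # joint position q(t)
qd = q.derivative(1)              # velocity: also a B-spline
qdd = q.derivative(2)             # acceleration: also a B-spline

t = np.linspace(0.0, 1.0, 101)
smoothness = float(np.mean(qdd(t) ** 2))   # mean-squared acceleration cost
acc_ok = bool(np.all(np.abs(qdd(t)) <= 50.0))  # hypothetical limit check
```

In an optimizer, `smoothness` and the limit violations would be terms in the objective, with gradients flowing back to `ctrl`; torque limits work analogously through an inverse-dynamics model.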
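For innovation (2), the key property is a dense signed distance field over the full workspace rather than distances only inside allocated blocks. A common CPU way to see what such a field contains is two Euclidean distance transforms over an occupancy grid (positive outside obstacles, negative inside). This sketch assumes a hypothetical box obstacle and voxel size; the paper's pipeline instead fuses depth into a TSDF and builds the ESDF on GPU.

```python
# Sketch only: dense ESDF from a boolean occupancy grid via two distance
# transforms. Not the paper's GPU pipeline; obstacle and voxel size are
# illustrative assumptions.
import numpy as np
from scipy.ndimage import distance_transform_edt

voxel_size = 0.05  # 5 cm voxels (illustrative)
occ = np.zeros((40, 40, 40), dtype=bool)
occ[15:25, 15:25, 15:25] = True  # hypothetical box obstacle

outside = distance_transform_edt(~occ)  # free voxels: distance to obstacle
inside = distance_transform_edt(occ)    # occupied voxels: distance to free
esdf = (outside - inside) * voxel_size  # signed distance in meters
```

A collision cost for a robot sphere of radius r at voxel v is then simply `max(0, r - esdf[v])`, which is why dense full-workspace coverage matters: queries never fall outside the field.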