Revealing NVIDIA Closed-Source Driver Command Streams for CPU-GPU Runtime Behavior Insight
2026-04-29 • Performance
AI summary
The authors studied how NVIDIA's CUDA software talks to the GPU hardware, a path usually hidden inside a closed-source driver. By hooking into the system at key points, they found a way to observe the exact commands sent to the GPU, capturing complete command submissions in real time. Using this visibility, they analyzed CUDA data-transfer methods and recent improvements in CUDA Graphs, showing how command-level insight helps explain and improve GPU performance. Their work provides tools for better analysis and for the future design of GPU software and hardware.
CUDA · NVIDIA GPU · userspace driver · hardware command stream · DMA · GPU doorbell register · CUDA Graphs · kernel driver · performance analysis · hardware-software co-design
Authors
Yuang Yan, Ian Karlin, Ryan Grant
Abstract
For NVIDIA GPUs, CUDA is the primary interface through which applications orchestrate GPU execution, yet much of the logic that realizes CUDA operations resides in NVIDIA's closed-source userspace driver. As a result, the translation from high-level CUDA APIs to low-level hardware commands remains opaque, limiting both software understanding and performance attribution. This paper makes that command path visible. We recover the hardware command streams emitted by NVIDIA's closed-source userspace driver with full integrity by leveraging the recently open-sourced kernel driver, instrumenting the memory-mapping path, and installing a hardware watchpoint on the userspace mapping of the GPU doorbell register. This lets us capture complete command submissions at the moment they are committed. Using this methodology, we present two case studies. For CUDA data movement, we identify the DMA submission modes selected by the driver and characterize their raw hardware performance independently of driver overhead through CUDA-bypassing controlled command issuance. For CUDA Graphs, we show that the reduced launch overhead in newer CUDA releases is associated with a smaller command footprint and a more efficient submission pattern. Together, these results show that command-level visibility provides a practical basis for understanding and optimizing GPU middleware behavior, improving performance interpretation, and informing future hardware-software co-design for CUDA and related accelerator stacks.