Strait: Perceiving Priority and Interference in ML Inference Serving
2026-04-30 • Machine Learning
AI summary
The authors developed a system called Strait to help serving systems handle tasks of different priorities when running AI models on GPUs. Strait predicts delays by modeling how data transfers contend with each other and how concurrently running tasks interfere, which improves its latency estimates. Using these predictions, it schedules tasks so that high-priority jobs meet their deadlines more often, even when GPUs are heavily loaded. The authors' tests show that Strait reduces missed deadlines for important tasks without hurting less important ones much, and that it shares resources more fairly than some competing methods.
machine learning inference, deep neural networks, GPU scheduling, task prioritization, latency estimation, kernel execution interference, deadline satisfaction, preemption, priority-aware scheduling, data transfer contention
Authors
Haidong Zhao, Nikolaos Georgantas
Abstract
Machine learning (ML) inference serving systems host deep neural network (DNN) models and schedule incoming inference requests across deployed GPUs. However, limited support for task prioritization and insufficient latency estimation under concurrent execution may restrict their applicability in on-premises scenarios. We present Strait, a serving system designed to enhance deadline satisfaction for dual-priority inference traffic under high GPU utilization. To improve latency estimation, Strait models potential contention during data transfer and accounts for kernel execution interference through an adaptive prediction model. Drawing on these predictions, it performs priority-aware scheduling to deliver differentiated handling. Evaluation results under intense workloads suggest that Strait reduces deadline violations for high-priority tasks by 1.02 to 11.18 percentage points while incurring acceptable costs on low-priority tasks. Compared to software-defined preemption approaches, Strait also exhibits more equitable performance.
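As a rough illustration of the scheduling idea described in the abstract, the sketch below implements a dual-priority, deadline-aware dispatcher in Python. It is a minimal reconstruction from the abstract alone, not Strait's actual implementation: the single EWMA-corrected interference factor stands in for Strait's adaptive prediction model, and all names (`DualPriorityScheduler`, `predict_latency`, `observe`) are hypothetical.

```python
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Request:
    deadline: float                              # absolute deadline (seconds)
    arrival: float = field(compare=False)        # arrival timestamp
    priority: int = field(compare=False)         # 0 = high, 1 = low
    base_latency: float = field(compare=False)   # profiled solo-run latency (s)


class DualPriorityScheduler:
    """Toy dual-priority, deadline-aware dispatcher (hypothetical API).

    Latency is predicted as the profiled solo-run latency scaled by one
    interference factor, corrected online with an EWMA of observed
    slowdowns, a crude stand-in for Strait's adaptive model, which also
    accounts for data-transfer contention.
    """

    def __init__(self, alpha: float = 0.2):
        self.queues = {0: [], 1: []}  # per-priority min-heaps, earliest deadline first
        self.interference = 1.0       # current multiplicative slowdown estimate
        self.alpha = alpha            # EWMA smoothing weight

    def predict_latency(self, req: Request) -> float:
        return req.base_latency * self.interference

    def observe(self, req: Request, actual: float) -> None:
        # Nudge the interference factor toward the slowdown just observed.
        observed = actual / max(req.base_latency, 1e-9)
        self.interference = (1 - self.alpha) * self.interference + self.alpha * observed

    def submit(self, req: Request) -> None:
        heapq.heappush(self.queues[req.priority], req)

    def next_request(self, now: float) -> Request | None:
        # The high-priority queue drains first; within a queue, EDF order.
        for prio in (0, 1):
            q = self.queues[prio]
            while q:
                req = heapq.heappop(q)
                if now + self.predict_latency(req) <= req.deadline:
                    return req
                # Predicted deadline miss: shed the request instead of
                # spending GPU time on it (a policy choice, not Strait's).
        return None


if __name__ == "__main__":
    sched = DualPriorityScheduler()
    sched.submit(Request(deadline=1.00, arrival=0.0, priority=1, base_latency=0.30))
    sched.submit(Request(deadline=0.50, arrival=0.0, priority=0, base_latency=0.20))
    req = sched.next_request(now=0.0)   # high-priority request dispatched first
    sched.observe(req, actual=0.26)     # interference estimate adapts to the slowdown
    print(req.priority, round(sched.interference, 3))
```

The sketch only shows how a latency predictor and a priority-aware queue plug together; the paper's setting would involve a richer interference model covering data-transfer contention and per-kernel execution effects.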