Predictive Autoscaling for Node.js on Kubernetes: Lower Latency, Right-Sized Capacity

2026-04-21

Software Engineering; Distributed, Parallel, and Cluster Computing
AI summary

The authors discuss limitations in how Kubernetes scales Node.js applications, pointing out that existing methods react only after overload starts, causing delays and poor performance. They propose a proactive scaling method that predicts future load and adjusts resources before problems arise. Their approach operates on aggregate metrics to avoid the feedback loops that scaling itself creates, and turns messy data into reliable signals for scaling decisions. Benchmarks show their method maintains target response times far better than current solutions.

Kubernetes, Node.js, Horizontal Pod Autoscaler, KEDA, scaling algorithm, event loop, CPU utilization, latency SLO, predictive scaling, autoscaling metrics
Authors
Ivan Tymoshenko, Luca Maraschi, Matteo Collina
Abstract
Kubernetes offers two default paths for scaling Node.js workloads, and both have structural limitations. The Horizontal Pod Autoscaler scales on CPU utilization, which does not directly measure event loop saturation: a Node.js pod can queue requests and miss latency SLOs while CPU reports moderate usage. KEDA extends HPA with richer triggers, including event-loop metrics, but inherits the same reactive control loop, detecting overload only after it has begun. By the time new pods start and absorb traffic, the system may already be degraded. Lowering thresholds shifts the operating point but does not change the dynamic: the scaler still reacts to a value it has already crossed, at the cost of permanent over-provisioning. We propose a predictive scaling algorithm that forecasts where load will be by the time new capacity is ready and scales proactively based on that forecast. Per-instance metrics are corrupted by the scaler's own actions: adding an instance redistributes load and changes every metric, even if external traffic is unchanged. We observe that operating on a cluster-wide aggregate that is approximately invariant under scaling eliminates this feedback loop, producing a stable signal suitable for short-term extrapolation. We define a metric model (a set of three functions that encode how a specific metric relates to scaling) and a five-stage pipeline that transforms raw, irregularly timed, partial metric data into a clean prediction signal. In benchmarks against HPA and KEDA under a steady ramp and a sudden spike, the algorithm keeps per-instance load near the target threshold throughout. Under the steady ramp, median latency is 26ms, compared to 154ms for KEDA and 522ms for HPA.
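The core idea in the abstract — forecast a scale-invariant aggregate over the capacity lead time, then size the fleet from the forecast — can be sketched in a few lines of Node.js. This is an illustrative simplification, not the paper's algorithm: all names (`aggregateLoad`, `forecastAggregate`, `desiredReplicas`, `leadTimeMs`) are invented here, and the paper's five-stage pipeline and three-function metric model are reduced to a plain linear extrapolation.

```javascript
// Cluster-wide aggregate load: the sum across instances. For the same
// external traffic this sum is approximately invariant under scaling —
// adding a pod redistributes load but does not change the total — which
// is what makes it a stable signal for extrapolation.
function aggregateLoad(perInstanceLoads) {
  return perInstanceLoads.reduce((sum, l) => sum + l, 0);
}

// Short-term linear extrapolation of the aggregate to now + leadTimeMs,
// i.e. the moment at which newly requested pods would be ready to serve.
// samples: [{ t: epochMs, value: aggregate }], sorted by time, length >= 2.
function forecastAggregate(samples, leadTimeMs) {
  const first = samples[0];
  const last = samples[samples.length - 1];
  const slope = (last.value - first.value) / (last.t - first.t);
  return last.value + slope * leadTimeMs;
}

// Replica count that keeps predicted per-instance load at or below the
// target threshold once the new capacity is actually available.
function desiredReplicas(samples, leadTimeMs, targetPerInstance) {
  const predicted = forecastAggregate(samples, leadTimeMs);
  return Math.max(Math.ceil(predicted / targetPerInstance), 1);
}
```

For example, if the aggregate grew from 100 to 160 over the last minute and pods take a minute to become ready, the forecast is 220, so with a per-instance target of 50 the sketch requests 5 replicas now rather than waiting for the threshold to be crossed.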