Statistical Analysis for Energy-Efficient Satellite Edge Computing with Latency Guarantees

2026-05-11 · Networking and Internet Architecture

AI summary

The authors studied how to guarantee fast response times (latency) for computing tasks done on satellites orbiting close to Earth, which is important for integrating these satellites into future 5G and 6G networks. They analyzed how unpredictable delays in both communication and computing affect performance using real hardware and image recognition software. Their model helps balance quick data processing with uncertainty, enabling smart choices about computer settings to meet time goals while saving energy. They found that combining statistical methods can meet strict deadlines 95% of the time and cut energy use by half compared to simpler methods. This approach can be used for many types of satellite computing tasks and different hardware.

Latency guarantees, LEO satellites, Edge computing, 5G and 6G networks, Statistical latency analysis, Quantile regression, GPU clock frequency, Energy efficiency, Communication delays, Execution time estimation
Authors
Nicolai Dalsgaard Lyholm, Beatriz Soret, Tijana Devaja, Thomas Grundgaard Mulvad, Cedomir Stefanovic, Israel Leyva-Mayorga
Abstract
Being able to provide latency guarantees for orbital edge computing applications through Low Earth Orbit (LEO) satellite constellations is a major milestone for their integration into 5G and 6G networks. However, achieving this is fundamentally challenged by the inherent randomness in both communication and computing latency, driven by complex network dynamics, satellite motion, and hardware variability. In this paper, we perform a statistical analysis of the latency of satellite edge computing using representative computing hardware and an object detection algorithm running on a satellite image dataset. The resulting model captures the trade-off between data availability and estimation uncertainty, enabling data-driven optimization methods to meet latency targets with statistical guarantees while minimizing energy consumption. Our results show that parametric estimation and quantile regression for the execution time of the image processing algorithm can be effectively combined with models for the communication latency to select an optimal GPU clock frequency. This achieves a 95% probability of meeting a 500 ms end-to-end deadline while reducing energy consumption by more than 50% compared to a baseline that relies on a Chebyshev-Cantelli inequality to bound execution-time quantiles. The proposed framework is generalizable across satellite edge computing workloads and hardware platforms.
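To make the abstract's selection logic concrete, here is a minimal sketch of the frequency-selection idea: upper-bound the 95th-percentile execution time per candidate GPU clock, either via the Chebyshev-Cantelli inequality (q_p ≤ μ + σ·√(p/(1−p)), valid for any distribution) or via the empirical quantile of measured samples, then pick the lowest-energy clock whose bound fits the deadline minus the communication latency. All numbers (clock frequencies, latency/energy profiles, the 150 ms communication budget) are illustrative assumptions, not values from the paper, and the paper's actual method uses parametric estimation and quantile regression rather than raw empirical quantiles.

```python
import random
import statistics

random.seed(0)

DEADLINE_MS = 500.0   # end-to-end deadline from the abstract
COMM_MS = 150.0       # assumed communication-latency budget (illustrative)
TARGET_P = 0.95       # required probability of meeting the deadline

# Hypothetical per-frequency (MHz) execution-time samples (ms) and energy per
# task (J). Higher clocks run faster but cost more energy; numbers are made up.
profiles = {
    600:  {"energy_j": 1.0, "samples": [random.gauss(420, 40) for _ in range(500)]},
    900:  {"energy_j": 1.8, "samples": [random.gauss(280, 25) for _ in range(500)]},
    1300: {"energy_j": 3.2, "samples": [random.gauss(210, 20) for _ in range(500)]},
}

def cantelli_quantile_bound(samples, p):
    """Distribution-free upper bound on the p-quantile via Chebyshev-Cantelli:
    q_p <= mean + std * sqrt(p / (1 - p))."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return mu + sigma * (p / (1 - p)) ** 0.5

def empirical_quantile(samples, p):
    """Direct empirical p-quantile of the measured execution times."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(p * len(s)))]

def min_energy_frequency(quantile_fn):
    """Lowest-energy GPU clock whose execution-time quantile fits the budget."""
    budget = DEADLINE_MS - COMM_MS
    feasible = [(prof["energy_j"], f) for f, prof in profiles.items()
                if quantile_fn(prof["samples"], TARGET_P) <= budget]
    return min(feasible)[1] if feasible else None

f_cantelli = min_energy_frequency(cantelli_quantile_bound)
f_empirical = min_energy_frequency(empirical_quantile)
print("Cantelli baseline picks:", f_cantelli, "MHz")
print("Empirical quantile picks:", f_empirical, "MHz")
```

With these synthetic profiles the conservative Cantelli bound declares the mid-range clock infeasible and falls back to the fastest (most energy-hungry) one, while the data-driven quantile estimate accepts the cheaper clock. This is the mechanism behind the abstract's claimed energy savings, although the specific percentages there come from the paper's measurements, not this toy example.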