Sustainability Is Not Linear: Quantifying Performance, Energy, and Privacy Trade-offs in On-Device Intelligence

2026-03-27 · Software Engineering

Software Engineering · Artificial Intelligence · Machine Learning
AI summary

The authors studied how running large language models on a smartphone affects battery life, speed, and response quality. They tested eight models on a Samsung Galaxy S25 Ultra without special device access to obtain realistic measurements. Surprisingly, shrinking models with advanced quantization barely reduced energy use, suggesting that a model's architecture matters more for battery life than how it is compressed. They also found that Mixture-of-Experts (MoE) models offer the capacity of much larger models while drawing energy like far smaller ones. Overall, they conclude that mid-sized models strike the best balance between quality and energy use on mobile devices.

Large Language Models · Edge Devices · Model Quantization · Energy Consumption · Latency · Mixture-of-Experts (MoE) · Mixed-Precision · Mobile AI · Samsung Galaxy S25 Ultra · Model Architecture
Authors
Eziyo Ehsani, Luca Giamattei, Ivano Malavolta, Roberto Pietrantuono
Abstract
The migration of Large Language Models (LLMs) from cloud clusters to edge devices promises enhanced privacy and offline accessibility, but this transition encounters a harsh reality: the physical constraints of mobile batteries, thermal limits, and, most importantly, limited memory. To navigate this landscape, we constructed a reproducible experimental pipeline to profile the complex interplay between energy consumption, latency, and quality. Unlike theoretical studies, we captured granular power metrics across eight models ranging from 0.5B to 9B parameters without requiring root access, ensuring our findings reflect realistic user conditions. We harness this pipeline to conduct an empirical case study on a flagship Android device, the Samsung Galaxy S25 Ultra, establishing foundational hypotheses regarding the trade-offs between generation quality, performance, and resource consumption. Our investigation uncovered a counter-intuitive quantization-energy paradox. While modern importance-aware quantization successfully reduces memory footprints to fit larger models into RAM, we found it yields negligible energy savings compared to standard mixed-precision methods. This indicates that, for battery life, the architecture of the model, not its quantization scheme, is the decisive factor. We further identified that Mixture-of-Experts (MoE) architectures defy the standard size-energy trend, offering the storage capacity of a 7B model while maintaining the lower energy profile of a 1B to 2B model. Finally, an analysis of these multi-objective trade-offs reveals a pragmatic sweet spot of mid-sized models, such as Qwen2.5-3B, that effectively balance response quality with sustainable energy consumption.
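The "pragmatic sweet spot" the abstract describes is the kind of result a simple Pareto-dominance analysis over quality and energy measurements would surface. As a minimal sketch of that idea, the snippet below marks a model as dominated when some other model is at least as good on both axes and strictly better on one; the model names and numbers are illustrative placeholders, not the paper's measured results.

```python
def pareto_front(models):
    """Return the names of models not dominated by any other model.

    Each entry is (name, quality, energy_joules_per_response).
    Model A dominates model B if A has quality >= B and energy <= B,
    with strict improvement on at least one axis.
    """
    front = []
    for name, quality, energy in models:
        dominated = any(
            q >= quality and e <= energy and (q > quality or e < energy)
            for n, q, e in models
            if n != name
        )
        if not dominated:
            front.append(name)
    return front


# Hypothetical (name, quality score, joules per response) tuples,
# loosely shaped like the abstract's findings: a mid-sized model and
# an MoE model sit on the frontier, a dense 7B model does not.
candidates = [
    ("small-0.5B", 0.55, 8.0),
    ("mid-3B",     0.78, 15.0),
    ("moe-7B",     0.79, 16.0),
    ("dense-7B",   0.79, 40.0),
]

print(pareto_front(candidates))  # → ['small-0.5B', 'mid-3B', 'moe-7B']
```

With these placeholder numbers, the dense 7B model drops off the frontier because the hypothetical MoE model matches its quality at far lower energy, mirroring the abstract's observation that MoE architectures break the usual size-energy trend.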