When Is the Same Model Not the Same Service? A Measurement Study of Hosted Open-Weight LLM APIs
2026-05-04 • Performance
AI summary
The authors studied how open-weight large language models (LLMs) are used in real-world services rather than just as downloadable files. They found that most usage concentrates on a few popular models, though older versions still see some use. They also noticed that just because a provider offers a model doesn't mean many people actually use it, and prices tend to stay stable even when performance varies. Finally, they showed that the best choice of provider and model depends on the specific task and context, and that choosing well can significantly reduce cost and improve speed. Overall, the authors suggest treating deployment of these models as a dynamic system with many interacting factors, not just a list of model features.
open-weight large language models · model deployment · service layer · demand concentration · provider heterogeneity · latency · throughput · task-conditioned routing · API hosting · statistical decision problem
Authors
Haorui Li, Zhenghui He, Xuanzi Liu, Yang Xu, Dongsheng Liu, Jiakang Ma, Lupan Wu, Yangjie Wu, Xiongchao Tang, Tianhui Shi
Abstract
Open-weight large language models (LLMs) are often described as downloadable model artifacts, but in production they are increasingly consumed as hosted APIs. This paper studies the intermediary service layer that turns a model release into an operational endpoint. Using sampled request logs, provider metadata, compatibility probes, pricing snapshots, and continuous latency measurements collected by AI Ping during Q4 2025, we analyze demand concentration, provider heterogeneity, and task-conditioned routing for popular open-weight model families. The first empirical pattern is concentration with inertia: among the model families displayed in the public aggregate, the largest family carries 32.0% of relative demand and the top five carry 87.4%, with a Gini coefficient of 0.693, yet older versions remain active after newer releases. The second pattern is a separation between supply and use: broad provider listing of a model does not imply realized adoption, and listed prices are more anchored than latency, throughput, context length, protocol support, and error semantics. The third pattern is conditionality: applications induce different token-length regimes, so the relevant service object is not a model name but a provider-model-task-time tuple under protocol and context constraints. In two representative counterfactuals, routing lowers Qwen3-32B cost by 37.8% and raises DeepSeek-V3.2 average throughput by about 90% relative to direct official access. These results suggest that open-weight LLM deployment should be studied as a constrained statistical decision problem over a heterogeneous service layer, rather than as a static catalog of model capabilities.
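The concentration figures above are summarized with a Gini coefficient over relative demand shares. A minimal sketch of that computation, using hypothetical shares chosen so the largest family holds 32.0% and the top five hold 87.4% as in the abstract (the tail is invented, so the resulting coefficient will not reproduce the paper's 0.693):

```python
# Illustrative sketch, not the paper's computation: Gini coefficient of
# relative demand shares across model families.
def gini(shares):
    """Gini coefficient of a list of non-negative values in [0, 1]."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    # Standard closed form: G = 2 * sum_i(i * x_i) / (n * total) - (n + 1) / n,
    # with x sorted ascending and i starting at 1.
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2 * weighted / (n * total) - (n + 1) / n

# Hypothetical shares for ten model families (percent of relative demand);
# top family = 32.0, top five sum to 87.4, tail values are invented.
shares = [32.0, 25.0, 15.0, 10.0, 5.4, 4.0, 3.0, 2.6, 2.0, 1.0]
print(round(gini(shares), 3))  # prints 0.524 for these invented shares
```

A coefficient of 0 would mean demand is spread evenly across families; values near 1 mean nearly all demand sits on one family.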
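The abstract's framing of the service object as a provider-model-task-time tuple under protocol and context constraints can be sketched as a small feasibility-then-cost selection. Everything here is hypothetical for illustration (the provider names, prices, and throughputs are invented, and the paper's actual routing uses richer measurements):

```python
# Illustrative sketch, not the paper's router: pick the cheapest endpoint
# for one model that satisfies a task's context, protocol, and
# throughput constraints. All catalog values below are invented.
from dataclasses import dataclass

@dataclass
class Endpoint:
    provider: str
    model: str
    price_per_mtok: float    # listed price, USD per million tokens
    throughput_tok_s: float  # measured decode throughput
    max_context: int         # advertised context length, tokens
    supports_stream: bool    # protocol capability (streaming responses)

def route(endpoints, prompt_tokens, needs_stream, min_throughput):
    """Return the cheapest endpoint that is feasible for this task."""
    feasible = [
        e for e in endpoints
        if e.max_context >= prompt_tokens
        and (e.supports_stream or not needs_stream)
        and e.throughput_tok_s >= min_throughput
    ]
    if not feasible:
        raise ValueError("no endpoint satisfies the task constraints")
    return min(feasible, key=lambda e: e.price_per_mtok)

# Same model, heterogeneous service characteristics across providers.
catalog = [
    Endpoint("A", "Qwen3-32B", 0.90, 45.0, 32_768, True),
    Endpoint("B", "Qwen3-32B", 0.55, 80.0, 131_072, True),
    Endpoint("C", "Qwen3-32B", 0.40, 20.0, 32_768, False),
]
best = route(catalog, prompt_tokens=8_000, needs_stream=True, min_throughput=40.0)
print(best.provider)  # prints B: cheapest option that meets all constraints
```

Note that the nominally cheapest listing ("C") is filtered out by the protocol and throughput constraints; this is the sense in which listed price is only one coordinate of the service object.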