In-Network Artificial Computing Enhanced Light Model-Switching for Emergency Communications Networks
2026-05-11 • Networking and Internet Architecture
AI summary
The authors developed a system for emergency communication networks that can switch between different AI models directly inside network devices, letting the network change how it handles packets without the delay of slow control-plane updates. Their approach keeps multiple small, fast neural networks resident in a shared framework, allowing instant per-packet model selection. Tested on commodity hardware, the system processes millions of packets per second with very low added delay, demonstrating that the method is practical for real-time network tasks.
in-network computing, Binary Neural Networks, model-switching, eBPF, XDP, AF_XDP, packet processing, latency, AVX-512, emergency communications
Authors
Yuehan Li, Zhiyuan Ren, Tao Zhang, Wenchi Cheng
Abstract
Emergency communications networks require in-network intelligence for timely traffic handling under dynamic demands and runtime constraints. In these environments, packets may need different inference behaviors, and conventional model replacement via control-plane updates is too slow for responsive operation. We propose an in-network artificial computing framework with lightweight model-switching, in which multiple Binary Neural Network (BNN) models are kept resident within a shared execution framework. Packet metadata selects the active model at packet granularity with O(1) selection cost. A fixed 1024-byte payload is aligned for x86 AVX-512, enabling efficient memory access. The framework is realized on an eBPF/XDP + AF_XDP stack. Experimental results show that the system sustains 1.894 Mpps with 0.528 µs inference latency, while model selection adds only 0.005 µs. Different resident models induce distinct packet-processing behaviors, scaling to 16 model slots preserves low switching overhead, and online model switching completes without wrong-verdict packets. Together, these results demonstrate the practicality of lightweight in-network artificial computing on commodity hardware.