Lakestream: A Consistent and Brokerless Data Plane for Large Foundation Model Training

2026-05-11

Distributed, Parallel, and Cluster Computing · Machine Learning
AI summary

The authors describe Lakestream, a new system designed to improve how training data is handled during large-scale AI model training. Unlike existing systems, Lakestream uses a new method called the Transactional Global Batch, which ensures that every training rank sees a consistent view of each batch across training steps. It also manages data recovery and cleanup directly within storage, making the system more reliable. The approach avoids the need for a central broker and maintains high data throughput with minimal delays, outperforming popular tools like Apache Kafka in tests on 64 GPUs.

Large Foundation Models · Batch Semantics · Lakehouse Storage · ACID Transactions · Distributed Training · Checkpointing · Data Ingestion · Brokerless Architecture · Apache Kafka · Throughput
Authors
Ting Sun, Junjie Zhang, Xiao Yan, Songxin Zhang, Zhuoyang Song, Jingyi Xi, Zunyao Mao, Bingyi Jing, Jiaxing Zhang, Zejian Xie
Abstract
Modern Large Foundation Model (LFM) training has transformed the data pipeline from a static ingestion layer into a dynamic component that must co-evolve with the training process. Existing systems are ill-equipped for this shift: colocated dataloaders offer no failure isolation, while message-queue-based disaggregated dataloaders operate on a record/offset abstraction that cannot express the batch-level semantics required by distributed training. We present Lakestream, a brokerless, object-store-native training data plane with three key properties. First, it introduces the Transactional Global Batch (TGB), which builds on lakehouse-style ACID storage semantics and extends them with training-specific consistency, including atomic all-rank batch visibility, a globally ordered step sequence, checkpoint-aligned lifecycle management, and end-to-end exactly-once recovery. Second, it realizes recovery and retention directly in the storage layer by inlining producer state in the manifest and tying reclamation to distributed checkpoint state. Third, its Decentralized Adaptive Commit (DAC) algorithm sustains stable ingestion throughput as the manifest grows, without any inter-producer communication. Evaluations on large-scale multimodal pre-training and SFT workloads using 64 GPUs show that Lakestream outperforms colocated dataloaders in throughput while providing full failure isolation, exceeds Apache Kafka in ingestion throughput, and achieves lower consumer read latency than Kafka.
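
To make the batch-level semantics in the abstract concrete, the sketch below illustrates the general shape of a Transactional Global Batch: producers publish a batch by atomically swapping a manifest, so every consumer rank resolves the same globally ordered step to the same data, and retention is keyed to a checkpointed step. This is a minimal, illustrative sketch only; the manifest layout and the functions `commit_batch`, `read_batch`, and `reclaim_up_to` are hypothetical stand-ins rather than Lakestream's actual format or API, and a local directory with an atomic rename stands in for object-store manifest commits.

```python
# Illustrative-only sketch of the Transactional Global Batch (TGB) idea from the
# abstract. NOT Lakestream's API: manifest layout, file names, and the functions
# commit_batch / read_batch / reclaim_up_to are hypothetical. A local directory
# stands in for the object store; os.replace stands in for an atomic manifest swap.
import json
import os
import tempfile

ROOT = tempfile.mkdtemp(prefix="tgb_demo_")
MANIFEST = os.path.join(ROOT, "manifest.json")


def _load_manifest():
    if not os.path.exists(MANIFEST):
        return {"next_step": 0, "batches": {}}  # step -> list of data files
    with open(MANIFEST) as f:
        return json.load(f)


def _swap_manifest(manifest):
    # Atomic replace: readers see either the old or the new manifest, never a
    # partially written one. This is the single point where a batch becomes
    # visible to every consumer rank at once.
    fd, tmp = tempfile.mkstemp(dir=ROOT)
    with os.fdopen(fd, "w") as f:
        json.dump(manifest, f)
    os.replace(tmp, MANIFEST)


def commit_batch(records):
    """Producer side: write data files, then publish them as the next global step."""
    manifest = _load_manifest()
    step = manifest["next_step"]
    path = os.path.join(ROOT, f"step-{step}.jsonl")
    with open(path, "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
    manifest["batches"][str(step)] = [path]
    manifest["next_step"] = step + 1       # globally ordered step sequence
    _swap_manifest(manifest)
    return step


def read_batch(step):
    """Consumer side: every rank resolves the same step to the same files."""
    manifest = _load_manifest()
    files = manifest["batches"].get(str(step))
    if files is None:
        return None                        # not yet committed (or already reclaimed)
    records = []
    for path in files:
        with open(path) as f:
            records.extend(json.loads(line) for line in f)
    return records


def reclaim_up_to(checkpoint_step):
    """Checkpoint-aligned retention: drop batches already covered by a checkpoint."""
    manifest = _load_manifest()
    for step in [s for s in manifest["batches"] if int(s) < checkpoint_step]:
        for path in manifest["batches"].pop(step):
            os.remove(path)
    _swap_manifest(manifest)


if __name__ == "__main__":
    commit_batch([{"sample": i} for i in range(4)])       # becomes global step 0
    commit_batch([{"sample": i} for i in range(4, 8)])    # becomes global step 1
    print(read_batch(1))                                  # identical view on all ranks
    reclaim_up_to(1)                                      # checkpoint taken at step 1
    print(read_batch(0))                                  # None: step 0 was reclaimed
```

The sketch only gestures at the lifecycle described in the abstract: in the paper's design, producer state is inlined in the manifest and reclamation is tied to distributed checkpoint state, whereas `reclaim_up_to` above simply deletes everything below a given step.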