Merlin: Deterministic Byte-Exact Deduplication for Lossless Context Optimization in Large Language Model Inference

2026-05-11

Computation and Language
AI summary

The authors present Merlin, a tool that quickly finds and removes repeated pieces of text in large datasets, making downstream processing faster and more efficient. It uses fast hashing and a compact in-memory hash table to compare data at high speed without losing any information. Merlin is especially useful for systems built around large language models, reducing redundant input by up to 71% in some cases. The authors also explain how Merlin can be safely integrated into development tools and workflows without sending data over the network.

Keywords
deduplication, hashing, xxHash3-64, SIMD, large language models, Retrieval-Augmented Generation, Model Context Protocol, data pipelines, open-addressing, text processing
Authors
Sietse Schelpe
Abstract
Data-intensive applications, ranging from large-scale retrieval systems to advanced data pipelines, are increasingly bottlenecked by the processing of highly redundant text corpora. We present Merlin, a local-first, workflow-agnostic, high-throughput deduplication and context optimization engine designed to mitigate these inefficiencies. Using a highly optimized, SIMD-friendly open-addressing flat hash set combined with xxHash3-64, Merlin performs rapid, byte-exact deduplication of text passages and data chunks. While broadly applicable to any text-processing workflow, its impact is particularly pronounced in Large Language Model (LLM) ecosystems such as Retrieval-Augmented Generation (RAG). Our empirical evaluations demonstrate input reductions ranging from 13.9% on low-redundancy datasets to over 71% in high-redundancy pipelines, while maintaining absolute data fidelity. Furthermore, we detail the system's integration architecture via the Model Context Protocol (MCP), enabling secure, zero-network-interception deployment across major IDEs and autonomous agents. This paper outlines the core algorithmic design, performance benchmarks, and the architectural principles required to process data at sustained speeds of up to 8.7 GB/s.
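
The abstract does not include Merlin's implementation, but the core technique it names can be sketched briefly: hash each chunk with xxHash3-64, probe an open-addressing hash table for a prior occurrence, and keep only first-seen chunks. The minimal Rust sketch below is an illustration under assumptions, not Merlin's actual code: it uses the `xxhash-rust` crate for the xxHash3-64 digest, and Rust's standard `HashMap` (itself a SwissTable-style open-addressing design) stands in for the paper's custom SIMD-friendly flat hash set. Bytes are compared on a hash hit, so the result stays byte-exact even in the event of a 64-bit hash collision.

```rust
// Sketch only. Assumes the crate `xxhash-rust` with the `xxh3` feature:
//   xxhash-rust = { version = "0.8", features = ["xxh3"] }
use std::collections::HashMap;
use xxhash_rust::xxh3::xxh3_64;

/// Keep the first occurrence of each byte-identical chunk, in input order.
/// The 64-bit xxHash3 digest is checked first; on a hash hit the raw bytes
/// are compared as well, so a hash collision can never drop unique data.
fn dedup_chunks<'a>(chunks: &[&'a [u8]]) -> Vec<&'a [u8]> {
    // std's HashMap is an open-addressing (SwissTable-style) table;
    // it stands in here for Merlin's custom SIMD-friendly flat hash set.
    let mut seen: HashMap<u64, Vec<&'a [u8]>> = HashMap::new();
    let mut unique = Vec::new();
    for &chunk in chunks {
        let digest = xxh3_64(chunk);
        let bucket = seen.entry(digest).or_default();
        if !bucket.iter().any(|&prev| prev == chunk) {
            bucket.push(chunk); // first time these exact bytes appear
            unique.push(chunk);
        }
    }
    unique
}

fn main() {
    // Three retrieved passages; the third is a byte-exact duplicate.
    let docs = ["passage A", "passage B", "passage A"];
    let chunks: Vec<&[u8]> = docs.iter().map(|s| s.as_bytes()).collect();
    let unique = dedup_chunks(&chunks);
    println!("kept {} of {} chunks", unique.len(), chunks.len()); // kept 2 of 3
}
```

In a real pipeline the surviving chunks would be reassembled into the model context; because only byte-identical chunks are dropped, the transformation is lossless by construction.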