Dissecting Jet-Tagger Through Mechanistic Interpretability

2026-05-11

Machine Learning
AI summary

The authors studied how a Particle Transformer neural network, trained to classify jets in physics, makes its decisions by breaking down its internal workings. They found a small group of six attention heads that together perform nearly as well as the full model, with clear roles: one starts the process, some middle ones pass information about particle pairs, and one at the end makes the final call. Their analysis showed the model focuses more on certain energy patterns related to two-part structures inside jets rather than three-part ones. This work shows that interpretation techniques from language models can help understand physics models and that training can uncover meaningful physical features without explicit guidance.
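The core manipulation behind findings like this is head ablation: zero out one attention head's output and see how much the model's behavior changes. The following is a minimal toy sketch of that idea (not the authors' code, and with random stand-in weights rather than a trained Particle Transformer): a single multi-head self-attention block in NumPy where one head's contribution can be deleted before the output projection.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tokens, d_model, n_heads = 5, 16, 4
d_head = d_model // n_heads

# Random toy weights standing in for one trained attention block.
Wq = rng.normal(size=(n_heads, d_model, d_head))
Wk = rng.normal(size=(n_heads, d_model, d_head))
Wv = rng.normal(size=(n_heads, d_model, d_head))
Wo = rng.normal(size=(n_heads * d_head, d_model))
x = rng.normal(size=(n_tokens, d_model))  # toy "particle" tokens

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, ablate_head=None):
    """Multi-head self-attention; optionally zero-ablate one head."""
    head_outs = []
    for h in range(n_heads):
        q, k, v = x @ Wq[h], x @ Wk[h], x @ Wv[h]
        attn = softmax(q @ k.T / np.sqrt(d_head))
        out = attn @ v
        if h == ablate_head:
            out = np.zeros_like(out)  # zero ablation: delete this head's contribution
        head_outs.append(out)
    return np.concatenate(head_outs, axis=-1) @ Wo

full = attention(x)
ablated = attention(x, ablate_head=2)
# The gap measures how much head 2 contributed on this input; in the paper the
# analogous quantity is the drop in tagging performance when a head is removed.
gap = np.linalg.norm(full - ablated)
```

In a real circuit analysis this is repeated over every head (and over held-out jets), and heads whose ablation barely moves the metric are pruned, leaving a sparse circuit such as the six-head one described above.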

mechanistic interpretability, Particle Transformer, jet classification, attention heads, energy correlator basis, N-subjettiness, linear probing, gradient descent, Top Quark Tagging, neural networks
Authors
Saurabh Rai, Sanmay Ganguly
Abstract
Mechanistic interpretability seeks to reverse engineer a trained neural network by identifying the minimal subset of internal components that implements its behavior. We perform a mechanistic interpretability analysis of the Particle Transformer architecture, trained on the Top Quark Tagging reference dataset, with the goal of identifying the computational circuit responsible for jet classification and characterizing the physical content of its internal representations. Combining zero ablation, path patching with two complementary on-manifold corruption strategies, and linear probing of the residual stream, we identify a sparse six-head circuit that recovers the vast majority of the full model's performance while admitting a clean source-relay-readout interpretation. In this circuit, a single early-layer head serves as the primary causal source, a cluster of middle-layer heads acts as relays that selectively attend to hard pairwise substructure, and a single late-layer head reads out the aggregated signal. Linear probes show that the residual stream is preferentially aligned with the energy correlator basis over the $N$-subjettiness basis. Within the energy correlator basis, the model preferentially encodes 2-prong substructure observables over 3-prong ones. A per-layer trained probe further reveals that the apparent single-step commitment of the model to a classification decision in the first class-attention block is in fact a basis rotation, with the discriminating signal already saturating in the particle-attention stack. These results demonstrate that mechanistic interpretability methods developed for natural language models can be applied to jet physics classifiers, and indicate that gradient descent may rediscover physically meaningful aspects of jet tagging without supervision.
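A linear probe of the kind used here is just a linear classifier fit on frozen intermediate activations: if a logistic regression on the residual stream can decode the jet label (or an observable), that information is linearly represented at that layer. The sketch below is a minimal, self-contained illustration with synthetic stand-in activations (not the paper's data); `X` plays the role of residual-stream vectors for top vs. QCD jets, and the probe is trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(42)
n_jets, d_model = 400, 32

# Synthetic stand-in for residual-stream activations: the two jet classes
# differ along one hidden direction, mimicking a linearly decodable signal.
direction = rng.normal(size=d_model)
y = rng.integers(0, 2, size=n_jets)
X = rng.normal(size=(n_jets, d_model)) + np.outer(2.0 * y - 1.0, direction)

# Linear probe: logistic regression trained with full-batch gradient descent.
w, b = np.zeros(d_model), 0.0
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad_w = X.T @ (p - y) / n_jets         # gradient of mean cross-entropy
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Probe accuracy: high accuracy means the label is linearly readable from X.
acc = (((X @ w + b) > 0).astype(int) == y).mean()
```

Training one such probe per layer, as in the per-layer analysis above, turns probe accuracy into a depth profile showing where in the network the discriminating signal first becomes linearly accessible.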