LLM as Clinical Graph Structure Refiner: Enhancing Representation Learning in EEG Seizure Diagnosis

2026-04-30
Artificial Intelligence
AI summary

The authors studied how to improve seizure detection from EEG signals, which are often noisy and hard to interpret. They found that current methods to build graphs from EEG data can create many unnecessary or wrong connections, making the detection less accurate. To fix this, they used large language models (LLMs) to review and clean up these graph connections, helping remove mistakes. Their experiments showed that this approach improved seizure detection and made the graph data clearer and easier to understand.

EEG, seizure detection, graph representation, large language models, edge refinement, Transformer, multilayer perceptron, TUSZ dataset, signal noise, graph learning
Authors
Lincan Li, Zheng Chen, Yushun Dong
Abstract
Electroencephalogram (EEG) signals are vital for automated seizure detection, but their inherent noise makes robust representation learning challenging. Existing graph construction methods, whether correlation-based or learning-based, often generate redundant or irrelevant edges due to the noisy nature of EEG data. This significantly impairs the quality of graph representations and limits downstream task performance. Motivated by the remarkable reasoning and contextual understanding capabilities of large language models (LLMs), we explore the idea of using LLMs as graph edge refiners. Specifically, we propose a two-stage framework. First, we verify that LLM-based edge refinement can effectively identify and remove redundant connections, leading to significant improvements in seizure detection accuracy and more meaningful graph structures. Building on this insight, we then develop a robust solution in which the initial graph is constructed by a Transformer-based edge predictor with a multilayer perceptron, which assigns probability scores to potential edges and applies a threshold to determine their existence. The LLM then acts as an edge set refiner, making informed decisions based on both textual and statistical features of node pairs to validate the remaining connections. Extensive experiments on the TUSZ dataset demonstrate that our LLM-refined graph learning framework not only enhances task performance but also yields cleaner and more interpretable graph representations.
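The abstract's first stage (a Transformer-based edge predictor plus an MLP that scores node pairs and thresholds them into edges) can be sketched in PyTorch. This is a minimal illustration, not the authors' implementation: the feature dimension, model sizes, 19-channel montage, and 0.5 threshold are all assumed placeholders.

```python
import torch
import torch.nn as nn

class EdgePredictor(nn.Module):
    """Hypothetical sketch: Transformer encoder over per-channel EEG features,
    then an MLP that scores each node pair as an edge probability."""

    def __init__(self, feat_dim=16, d_model=32):
        super().__init__()
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=1)
        self.mlp = nn.Sequential(
            nn.Linear(2 * d_model, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):                       # x: (batch, nodes, feat_dim)
        h = self.encoder(self.proj(x))          # contextual node embeddings
        n = h.size(1)
        i, j = torch.triu_indices(n, n, offset=1)     # all unordered pairs
        pairs = torch.cat([h[:, i], h[:, j]], dim=-1) # concat pair embeddings
        scores = torch.sigmoid(self.mlp(pairs)).squeeze(-1)
        return scores, (i, j)                   # scores: (batch, n*(n-1)/2)

model = EdgePredictor()
scores, (i, j) = model(torch.randn(1, 19, 16))  # e.g. 19 EEG channels
keep = scores[0] > 0.5                          # threshold edge probabilities
edges = list(zip(i[keep].tolist(), j[keep].tolist()))
```

The resulting `edges` list would then be handed to the LLM refiner (the second stage), which accepts or rejects each candidate connection from textual and statistical descriptions of the node pair.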