Explainability of Recurrent Neural Networks for Enhancing P300-based Brain-Computer Interfaces

2026-05-11

Machine Learning · Artificial Intelligence · Human-Computer Interaction
AI summary

The authors introduce the Post-Recurrent Module (PRM), a new layer added to brain-computer interface (BCI) models that detect P300 signals in brain activity. The module improves classification performance and makes the model's decisions easier to interpret by revealing which brain regions and time intervals contribute most to each prediction. The method improves accuracy by 9% over previous models, and its explanations agree with established neuroscience findings about the P300. The framework also generalizes beyond P300 detection to other EEG-based tasks.

Brain-Computer Interface (BCI), P300, Electroencephalography (EEG), Recurrent Neural Network (RNN), Deep Learning, Explainability, Spatio-temporal Analysis, Inter-subject variability, Intra-subject variability, Event-Related Potentials
Authors
Christian Oliva, Vinicio Changoluisa, Francisco B Rodríguez, Luis F Lago-Fernández
Abstract
Brain-Computer Interfaces (BCIs) based on P300 event-related potentials offer promising applications in health, education, and assistive technologies. However, challenges related to inter- and intra-subject variability and the explainability of Deep Learning (DL) models limit their practical deployment. In this work, we present the Post-Recurrent Module (PRM), an additional layer designed to improve both performance and transparency, incorporated into a Recurrent Neural Network (RNN) architecture for classifying P300 signals from EEG data. Our approach enables a dual analysis of spatio-temporal signals through both global and local explainability techniques, allowing us not only to identify the most relevant brain regions and critical time intervals involved in classification, but also to interpret model decisions in terms of spatio-temporal EEG patterns consistent with well-established neurophysiological descriptions of the P300. Experimental results show a 9% improvement in performance over the state of the art, while also revealing the importance of inter- and intra-subject variability, in alignment with established neuroscience literature. By making model decisions transparent and efficient, we present a framework for explainable EEG-based models. This framework is not limited to more efficient P300 detection, but can be generalized to a wide range of EEG-based tasks. Its ability to identify key spatial and temporal features makes it suitable for applications such as motor imagery, steady-state visual evoked potentials, and even cognitive workload assessment.
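The abstract does not specify the PRM's internals, but the general idea of a recurrent layer followed by a post-recurrent stage that exposes interpretable temporal weights can be illustrated with a minimal NumPy sketch. All names, shapes, and the attention-style weighting below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: 8 EEG channels, 100 time samples per epoch.
n_channels, n_steps, n_hidden = 8, 100, 16

def rnn_with_prm(x, Wxh, Whh, bh, w_att, w_out, b_out):
    """Elman RNN over time, then a post-recurrent weighting stage.

    Returns a P300 score in (0, 1) and the per-time-step weights,
    which can be inspected for a local explanation of the decision.
    """
    h = np.zeros(n_hidden)
    hidden_states = []
    for t in range(n_steps):
        h = np.tanh(Wxh @ x[:, t] + Whh @ h + bh)
        hidden_states.append(h)
    H = np.stack(hidden_states)        # (n_steps, n_hidden)
    scores = H @ w_att                 # one relevance score per step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()               # softmax over time
    context = alpha @ H                # weighted summary of hidden states
    logit = context @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit)), alpha

# Random parameters and a random EEG epoch, just to run the sketch.
x = rng.standard_normal((n_channels, n_steps))
Wxh = rng.standard_normal((n_hidden, n_channels)) * 0.1
Whh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
bh = np.zeros(n_hidden)
w_att = rng.standard_normal(n_hidden)
w_out = rng.standard_normal(n_hidden)
b_out = 0.0

p, alpha = rnn_with_prm(x, Wxh, Whh, bh, w_att, w_out, b_out)
```

In this toy version, `alpha` plays the role of the explainability signal: peaks in it mark the time intervals the classifier relied on, which for a trained P300 model would be expected to concentrate around 250–500 ms post-stimulus.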