IEEE Access, vol. 13, pp. 143291-143301, 2025 (SCI-Expanded)
Brain-computer interfaces (BCIs) offer promising solutions for assisting individuals with disabilities, supporting neurorehabilitation, and enhancing human capabilities. However, the limited decoding accuracy of electroencephalography (EEG)-based motor imagery (MI) signals poses a major challenge for the practical deployment of BCI systems. A common approach boosts classification accuracy by using signals from opposite hemispheres. Yet, to control devices such as robotic hands or prosthetics in a human-like manner, it is essential to accurately classify hand opening and closing tasks from EEG signals recorded over the same motor cortex region. This study introduces TransformerNet, a novel deep learning architecture designed to classify hand open-close MI tasks from the same brain region, a task made particularly challenging by the highly similar, overlapping EEG patterns the two classes produce. TransformerNet combines a convolutional module, inspired by EEGNet, that extracts local spatial features with a Transformer encoder that captures long-range temporal dependencies; a channel attention mechanism further sharpens the model's focus on the most informative features. In experimental evaluations, TransformerNet achieved an average classification accuracy of 85.97%, outperforming traditional deep learning methods. The model effectively captures high-level temporal-spectral patterns and uncovers hidden dependencies within the EEG signals. These results demonstrate the potential of integrating attention mechanisms with Transformer-based architectures to improve MI-based BCI performance, moving BCI technologies closer to practical, reliable use in real-world applications such as brain-controlled prosthetics, assistive devices, and human-computer interaction.
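The abstract names the architecture's three components (EEGNet-inspired convolutional module, channel attention, Transformer encoder) but gives no implementation details. The PyTorch sketch below shows one plausible arrangement of those pieces; every hyperparameter (filter counts, kernel sizes, head count, the squeeze-and-excitation form of the attention, and the 22-electrode, 1000-sample input shape) is an illustrative assumption, not taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (an assumed form;
    the paper's exact attention mechanism is not specified in the abstract)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        w = self.fc(self.pool(x).squeeze(-1))  # per-channel weights in [0, 1]
        return x * w.unsqueeze(-1)             # emphasize informative channels

class TransformerNetSketch(nn.Module):
    """Conv front end (temporal filtering + depthwise spatial filtering,
    as in EEGNet), then channel attention, then a Transformer encoder.
    All sizes below are illustrative assumptions."""
    def __init__(self, n_electrodes: int = 22, n_classes: int = 2,
                 f1: int = 16, d_model: int = 32, n_heads: int = 4):
        super().__init__()
        self.conv = nn.Sequential(
            # temporal convolution along the raw EEG time axis
            nn.Conv2d(1, f1, (1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(f1),
            # depthwise spatial convolution across electrodes
            nn.Conv2d(f1, d_model, (n_electrodes, 1), groups=f1, bias=False),
            nn.BatchNorm2d(d_model),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),  # shorten the sequence before attention
        )
        self.attn = ChannelAttention(d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=64,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):            # x: (batch, 1, n_electrodes, time)
        h = self.conv(x).squeeze(2)  # (batch, d_model, time')
        h = self.attn(h)             # re-weight feature channels
        h = self.encoder(h.transpose(1, 2))  # long-range temporal dependencies
        return self.head(h.mean(dim=1))      # pool over time, then classify

# Example: a batch of 8 four-second trials at 250 Hz from 22 electrodes.
logits = TransformerNetSketch()(torch.randn(8, 1, 22, 1000))  # (8, 2)
```

One reason such a hybrid is attractive: the convolutional front end downsamples the EEG time axis before self-attention is applied, which keeps the Transformer's quadratic attention cost manageable while still letting the encoder model dependencies across the whole trial.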