Optimizing Seizure Detection: A Comparative Study of SVM, CNN, and RNN-LSTM

Sakshi Kumari* Vijay Khare Parul Arora

Department of Electronics and Communication Engineering, Jaypee Institute of Information Technology, Noida 201309, India

Corresponding Author Email: sakshichauhan20466@gmail.com

Page: 369-378 | DOI: https://doi.org/10.18280/ijcmem.120405

Received: 21 October 2024 | Revised: 26 November 2024 | Accepted: 7 December 2024 | Available online: 27 December 2024

© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).


Abstract: 

Epilepsy is a complex neurological disorder marked by recurrent and unpredictable seizures that can greatly affect an individual's quality of life, and it affects millions of people worldwide. Accurate and timely detection of epileptic seizures is crucial in the management and treatment of epilepsy. Many methods have been put forth recently for the diagnosis of epileptic seizures using magnetic resonance imaging (MRI) and electroencephalography (EEG). This work focuses on using deep learning and machine learning techniques, such as Support Vector Machines (SVMs), Recurrent Neural Networks (RNNs), and Convolutional Neural Networks (CNNs), to automatically identify epileptic seizures. These techniques have shown promising results in a variety of fields, including time series data processing and medical image analysis. In this work, we present a method for detecting epileptic seizures from electroencephalogram (EEG) data by comparing the outcomes of three architectures: SVM, CNN, and RNN-LSTM (Long Short-Term Memory). The experimental results demonstrate that the SVM, CNN, and RNN-LSTM models exhibit promising performance in detecting epileptic seizures from EEG data.

Keywords: 

epilepsy seizures, convolutional neural network (CNN), support vector machine (SVM), recurrent neural network (RNN), electroencephalogram (EEG)

1. Introduction

Millions of people worldwide suffer from epilepsy, a neurological condition characterized by recurring, spontaneous seizures. If not detected at an early stage, the condition can become life threatening. Accurate and timely detection of epileptic seizures is necessary to improve patient outcomes, enable effective treatment strategies, and ensure the safety and well-being of individuals living with epilepsy. Electroencephalography (EEG) is a crucial tool for monitoring and diagnosing epilepsy as it records the brain's electrical activity. Various techniques exist for extracting features and classifying epileptic seizures from EEG data [1, 2].

Machine learning, hybrid [3], and deep learning techniques have produced remarkable results in the automatic detection of epileptic seizures. Recently, deep learning and machine learning algorithms have shown great potential in numerous fields, including healthcare [4-9]. Specifically, deep learning models such as RNN-LSTM [10, 11], CNN [12-14], and GAN (Generative Adversarial Network) [15], and machine learning methods such as random forest [16], ensemble learning [17], and SVM [18-20], have been successful in processing sequential and spatial data. Many transform techniques have been used for data preprocessing; the wavelet transform gives particularly accurate results and can eliminate the need for a separate feature extraction step [20, 21].

In the proposed work we use SVM, RNN-LSTM, and CNN algorithms to build a reliable and accurate seizure detection system for epilepsy patients from EEG datasets, leveraging the power of deep learning. The problems addressed in this work are to increase detection accuracy and to compare the algorithms' results on a large dataset. We use long-duration EEG recordings that contain data points before, during, and after seizures.

The key contributions of our work include a comprehensive comparison of SVM, RNN-LSTM, and CNN models for epileptic seizure detection, providing a thorough evaluation of their respective strengths and weaknesses. By applying these models separately, we highlight their individual performance metrics and identify the most effective approach for different aspects of EEG signal analysis. Our work distinguishes itself by not focusing on a single model but by offering detailed insights into each model's efficacy, ultimately guiding subsequent studies and clinical applications of seizure detection. Additionally, our model's capability to process long-duration EEG signals and accurately capture the pre-seizure, seizure, and post-seizure phases distinguishes it from existing methods, providing a comprehensive solution for real-time monitoring of epilepsy.

By comparing multiple models, we gain a deeper understanding of their strengths and weaknesses in the context of EEG-based seizure detection. This comparative analysis allows us to identify the most suitable model for specific applications, such as real-time monitoring or clinical diagnosis. Additionally, it provides insights into the underlying mechanisms of each model, which can inform future research and development of more advanced seizure detection techniques.

The primary focus of this research paper is to design a deep neural network model aimed at enhancing the sensitivity of seizure detection using EEG signals. To frame the scope of the problem addressed in this investigation, we first present the problem motivation, including a brief summary of the issues and the basic background on epilepsy that is required. The paper then presents the methodology, the results of the conducted experiments, and future directions.

2. Literature Survey

Epilepsy seizure detection is an important task in the area of medical diagnostics, as it helps in the timely identification and management of epileptic seizures. Various researchers have made significant contributions to this field by utilizing SVM and deep learning algorithms.

Developing automatic systems for detecting epileptic seizures has been a critical area of research focused on enhancing the diagnosis and treatment of epilepsy, a condition marked by recurrent seizures. Numerous contributions have been made in this field, utilizing various techniques and approaches to accurately detect and classify epileptic seizures.

A method for automatically detecting seizures using electroencephalogram (EEG) signals was proposed [22]. The approach involved extracting multi-domain features, including statistical, time-frequency, and nonlinear measures, from EEG data. The machine learning classifiers then used these attributes as inputs to differentiate seizure and non-seizure segments. The method achieved high accuracy in seizure detection and showed promise in real-time applications.

Truong et al. [23] focused on seizure prediction using a deep learning method based on CNN. The study utilized both intracranial EEG and scalp EEG data to capture localized and global patterns associated with seizures. The CNN model was trained to learn discriminative characteristics from the EEG signals and predict the occurrence of seizures. The presented methodology achieved competitive output in seizure prediction, highlighting the effectiveness of deep learning techniques.

In 2018, Tsiouris et al. [24] investigated the use of deep learning methods, particularly Long Short-Term Memory (LSTM) networks, for the identification of epileptic seizures. The study employed preprocessed EEG signals as inputs to LSTM models, which captured the temporal dependencies in the data. The models were trained to classify segments of EEG signals as either seizure or non-seizure. The experimental outcomes highlighted the accuracy and potential of LSTM networks in seizure detection.

A method for automatic seizure detection that combines Random Forests (RF) and wavelet transform was proposed [25]. EEG data were broken down into various frequency bands using the wavelet transform, and these bands were then utilized as features. These attributes served as the training set for the RF classifier, which classified seizure and non-seizure segments. The method demonstrated the efficacy of wavelet-based feature extraction and RF classification by achieving good seizure detection performance.

Ma et al. [26] presented a deep learning technique for seizure detection using LSTM networks enhanced with an attention mechanism. The LSTM network was designed to recognize temporal dependencies in EEG signals, while the attention mechanism emphasized relevant segments for seizure detection. This approach achieved high accuracy in detecting seizures and outperformed traditional machine learning algorithms, demonstrating the capabilities of attention mechanisms and deep learning in this domain.

Maia et al. [27] presented the use of multimodal signals, specifically Electroencephalogram (EEG) and Electrocardiogram (ECG), for seizure detection. The study combined features extracted from both modalities and employed deep learning techniques, such as CNN and LSTM, for classification.

Building on these prior models, we use SVM, CNN, and RNN-LSTM algorithms for seizure detection. The selection of SVM, CNN, and RNN-LSTM models for this study was driven by their unique strengths and suitability for EEG data analysis and epileptic seizure detection. SVM is robust in handling high-dimensional data and effective in binary classification tasks, making it a reliable baseline model. CNN excels at automatic feature extraction and capturing spatial hierarchies in the data, particularly beneficial for identifying spatial patterns in EEG signals. RNN-LSTM is proficient in modeling temporal dependencies in sequential data, such as EEG time series, by retaining important information over extended periods. However, they also present challenges: SVM struggles with complex, large-scale datasets, CNN is computationally demanding, and RNN-LSTM tends to overfit. These complementary strengths and weaknesses allow a comprehensive evaluation of EEG-based epileptic seizure detection.

3. Methods and Materials

The suggested methodology for this study is as shown in Figure 1.

Figure 1. Flowchart of proposed methodology

3.1 Dataset acquisition

We used the CHB-MIT dataset [28] for epileptic seizure detection with three models: SVM, RNN-LSTM, and CNN.

Here are some of the features of the CHB-MIT dataset:

  • The collection includes 23 cases of EEG recordings from 22 young patients aged 1.5 to 22 years with intractable seizures.

  • It contains 916 hours of EEG recordings. These recordings were collected after patients stopped taking anti-seizure medication.

  • The dataset is divided into seizure and non-seizure segments, with a total of 664 EEG files. These files contain a total of 198 annotated seizures, providing a rich source of data for studying epileptic activity.

  • Each EEG file can be either one hour or four hours long.

  • Each case contains one or more EEG recordings, each of which is a multi-channel time series; most files contain 23 EEG channels, and all signals are sampled at 256 Hz with 16-bit resolution.

  • The EEG channels are recorded from different locations at the scalp, and they represent different electrical activity in the brain.

  • The onsets and ends of seizures in the CHB-MIT dataset are annotated by expert epileptologists.

The dataset is available in the public domain and is available for download from the PhysioNet website.

The CHB-MIT dataset is a useful tool for scientists investigating epilepsy seizure detection and prediction. The dataset is large and diverse, and it includes a variety of seizure types. This makes it ideal for training and evaluating new seizure detection algorithms.

3.2 Load dataset

The dataset CSV file contains 23 columns and 2560 rows of long-duration EEG; sample values of the dataset are shown in Figure 2.

Figure 2. CHB-MIT data of epilepsy seizures in csv file format

3.3 Preprocessing

The preprocessing of EEG signals is a critical step before feeding the data into machine learning models. In this study, EEG data preprocessing involves several key steps to ensure the dataset is suitable for training the SVM, CNN, and RNN-LSTM models. The preprocessing steps include data loading, normalization, and reshaping, as well as class balancing for the SVM model.

Initially, the EEG data was loaded from the CHB-MIT dataset using the pandas library. The dataset was inspected to identify and handle any missing values or anomalies. The raw EEG signals were extracted from specific channels, such as 'C3-P3' and 'F8-T8'. The column labeled 'C3-P3' is selected from the DataFrame, stored in the variable 'X', and reshaped into a 2D array. Next, the column labeled 'T8-P8.1' is selected from the DataFrame and stored in another variable. Then, the `make_classification` function is used to generate a synthetic classification dataset of 1000 samples with a 90% class imbalance for one class.
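A minimal sketch of this loading and channel-selection step is shown below; the CSV file name is a placeholder, and any `make_classification` settings beyond those stated above (number of features, random seed) are illustrative assumptions.

```python
import pandas as pd
from sklearn.datasets import make_classification

# Load one exported CHB-MIT segment (23 channel columns, 2560 rows); file name is hypothetical.
df = pd.read_csv("chbmit_segment.csv")

# Select individual EEG channels and reshape them into 2D column vectors.
X = df["C3-P3"].values.reshape(-1, 1)
y_channel = df["T8-P8.1"].values.reshape(-1, 1)

# Synthetic classification set with a 90/10 class imbalance, as described above.
X_syn, y_syn = make_classification(
    n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42
)
```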

To achieve reliable model evaluation, the CHB-MIT EEG dataset was carefully divided into training, validation, and testing sets for the experimental setting of this investigation. The generated dataset is then separated into training, validation and testing sets using the 'train_test_split' method from 'sklearn.model_selection'. The training set was used to train the models, the validation set was used to tune hyperparameters and monitor overfitting, and the test set was used to evaluate the final performance of the models. The split ratios were approximately 80% for training, 10% for validation, and 10% for testing.
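The 80/10/10 split can be obtained with two successive calls to 'train_test_split'; the sketch below uses a synthetic stand-in for the preprocessed EEG feature matrix, so the exact arguments are illustrative rather than the study's code.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy stand-in for the preprocessed EEG feature matrix.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=42)

# First split off 20% of the data, then divide it evenly into validation
# and test sets, giving roughly 80/10/10 overall.
X_train, X_temp, y_train, y_temp = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
X_val, X_test, y_val, y_test = train_test_split(
    X_temp, y_temp, test_size=0.5, stratify=y_temp, random_state=42
)
```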

A linear kernel with the regularization parameter C set to 1.0 was utilized for the SVM model. Three convolutional layers, with 2 max-pooling layers, and a fully connected layer with ReLU activation made up the CNN model's configuration. To avoid overfitting, the RNN-LSTM model has two LSTM layers with 56 and 50 units and a 0.03 dropout rate. The Adam optimizer was used to train CNN and RNN-LSTM models, with a learning rate of 0.001. TensorFlow 2.4.1 and Python 3.8 were used in the experiments on a system that has an NVIDIA GTX 1080 GPU, an Intel i7 processor, and 16GB of RAM. These specifics guarantee the dependability and reproducibility of the findings from this investigation.

Below is the signal representation after the removal of noise at the time of preprocessing of the EEG signals in Figure 3.

Figure 3. Epileptical signals post preprocessing

3.4 Normalize the input features and compile the model

The model rescales the input features using MinMaxScaler to maintain values between 0 and 1. This step is crucial for consistent and dependable model training. Subsequently, it reshapes the standardized input features into a 3D array to align with the expected input format.

It compiles the model using the 'compile' method. It designates the loss function as 'categorical_crossentropy,' commonly used for multiclass classification. The optimization method 'adam' is chosen, and the evaluation metric is set to 'accuracy'. Next, the number of unique classes found in the training labels 'y_train' is assigned to the variable 'num_classes' using the 'np.unique' function. Then, using the 'np_utils.to_categorical' function, the labels 'y_train' and 'y_test' are transformed into one-hot encoded vectors, with 'num_classes' as the requested number of classes. Finally, the 'fit' method is used to train the model. Standardizing the training data 'X_train' entails subtracting the mean and dividing the result by the standard deviation; the testing data 'X_test' is standardized in the same way. The standardized data and the corresponding labels are passed as the training and validation data, respectively.
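A minimal sketch of these normalization, encoding, and compilation steps is given below, using the tf.keras equivalent of 'np_utils.to_categorical'. Here 'X_train', 'X_test', 'y_train', and 'y_test' are assumed to come from the split above, 'model' is assumed to be one of the Keras architectures defined in Sections 3.6 and 3.7, and the epoch and batch-size values are placeholders.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras.utils import to_categorical

# Rescale features to [0, 1], then reshape to (samples, timesteps, features).
scaler = MinMaxScaler()
X_train_s = scaler.fit_transform(X_train).reshape(X_train.shape[0], X_train.shape[1], 1)
X_test_s = scaler.transform(X_test).reshape(X_test.shape[0], X_test.shape[1], 1)

# One-hot encode the labels for the categorical cross-entropy loss.
num_classes = len(np.unique(y_train))
y_train_cat = to_categorical(y_train, num_classes)
y_test_cat = to_categorical(y_test, num_classes)

# Compile and train; 'model' is assumed to be a Keras model (Sections 3.6-3.7).
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train_s, y_train_cat,
          validation_data=(X_test_s, y_test_cat),
          epochs=50, batch_size=32)  # placeholder training settings
```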

3.5 SVM

SVM is a supervised machine learning algorithm that is effective in both classification and regression applications. In essence, SVM looks for the optimal hyperplane in the feature space that separates the classes. The hyperplane is chosen so that the margin, i.e., the distance between it and the closest data points from each class, is maximized. The data points closest to the hyperplane are known as support vectors.

The way it operates is by locating the feature space hyperplane that best divides the classes.

  • Architecture: We use a linear SVM for its simplicity and effectiveness in high-dimensional spaces.

  • Feature Extraction: We extracted statistical features from the raw EEG signal, such as mean, standard deviation, skewness, and kurtosis. These features capture the overall characteristics of the EEG signal and can be used to discriminate between seizure and non-seizure segments.

  • Training Process: The EEG data is preprocessed and transformed into feature vectors. These feature vectors are then fed into the SVM for training.

In this work, we used a support vector classifier to normalize and classify the data by generating hyperplanes. We initialized an SVM classifier with a linear kernel and class weights to handle class imbalance. The dataset was balanced using resampling techniques to ensure the model was trained and tested on balanced data, and the data was split into training and testing sets in the ratio 80:20. Model evaluation was conducted through 30-fold cross-validation to assess accuracy and loss metrics.
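A sketch of this SVM setup, using a synthetic imbalanced dataset as a stand-in for the extracted EEG features, is given below; class weighting is shown as one way to handle the imbalance alongside the resampling mentioned above, and the random seeds are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

# Imbalanced stand-in data for the EEG feature vectors.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)

# Linear-kernel SVM (C = 1.0) with class weighting to counter the imbalance.
clf = SVC(kernel="linear", C=1.0, class_weight="balanced")

# 80:20 train/test split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# 30-fold cross-validation on the training set.
scores = cross_val_score(clf, X_train, y_train, cv=30)
print("CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Final fit and held-out evaluation.
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```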

The decision function of a linear SVM is given by Eq. (1); the decision boundary is the set of points where this function equals zero:

$y = w \cdot x + b$                      (1)

where, y is the output decision value (its sign gives the class prediction); w is the weight vector; b is the bias term; and x is the input feature vector.

Figure 4 depicts the training data, with each point representing a sample. The two classes are clearly separable, as evidenced by the distinct clusters.

Figure 4. Classification in SVM

The detailed architecture of the SVM model employed for the classification of epileptic seizure and non-seizure EEG signals shown in Figure 5.

Figure 5. SVM architecture for epileptic seizure detection

3.6 CNN

CNNs are a class of deep learning models created primarily for processing structured grid-like data, such as images, audio spectrograms, and even text. They use convolutional layers for extracting local patterns, pooling layers for downsampling and aggregating data, and fully connected layers for global pattern recognition. CNNs have revolutionized the field of computer vision and are widely used for various tasks, offering robustness, scalability, and the ability to learn hierarchical representations.

The convolution operation, a core component of CNNs, is mathematically defined as Eq. (2):

$(f * g)(t)=\int_{-\infty}^{+\infty} f(\tau) g(t-\tau) d \tau$            (2)

Figure 6. CNN architecture

In discrete terms for CNN, this operation can be represented as Eq. (3):

$(f * g)[n]=\sum_{m=-\infty}^{+\infty} f[m] g[n-m]$              (3)

In this context, f represents the input signal, g represents the convolutional filter, and the resulting sum (f * g)[n] represents the convolution of the input signal with the filter at a particular point. This process enables CNNs to extract patterns and features from the EEG data, facilitating accurate signal classification. Figure 6 presents the architecture of the CNN model used in this proposed work.
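As a small numerical illustration of Eq. (3), the snippet below convolves a toy 1D signal with a short filter using NumPy; the values are arbitrary and only meant to show the operation.

```python
import numpy as np

# Toy 1D signal (e.g., a short EEG window) and a length-3 filter.
f = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
g = np.array([1.0, 0.0, -1.0])

# Full discrete convolution, as in Eq. (3): (f*g)[n] = sum_m f[m] g[n-m].
print(np.convolve(f, g, mode="full"))   # [ 0.  1.  2.  0. -2. -1.  0.]
```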

In this proposed method we applied:

  • 3 convolutional layers: the first uses 32 filters, the second 64 filters, and the third 128 filters, each with a kernel size of 3$\times$3 and the ReLU activation function.

  • 2 max pooling layers of 2$\times$2 pool size are used after the convolutional layers to reduce dimensionality and capture important features.

  • Afterwards, a flatten layer was used to convert the final convolutional layer's output into a 1D vector.

  • We employ two fully-connected dense layers:

         > 256 neurons with a ReLU activation function are found in the first dense layer.

         > To avoid overfitting, a dropout layer with a rate of 0.5 is inserted after this dense layer.

         > 128 neurons in the second dense layer have a ReLU activation function.

         > This dense layer is followed by another dropout layer with a rate of 0.5.

         > The output layer has a number of neurons equal to the number of classes in the classification problem.

The softmax activation function is used for multi-class classification.
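A minimal Keras sketch of this configuration is given below; the input shape (23 channels by 256 samples treated as a single-plane image) and the placement of the two pooling layers after the first two convolutional layers are illustrative assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

num_classes = 2  # seizure vs. non-seizure

cnn = Sequential([
    Conv2D(32, (3, 3), activation="relu", input_shape=(23, 256, 1)),  # assumed input shape
    MaxPooling2D((2, 2)),
    Conv2D(64, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Conv2D(128, (3, 3), activation="relu"),
    Flatten(),
    Dense(256, activation="relu"),
    Dropout(0.5),
    Dense(128, activation="relu"),
    Dropout(0.5),
    Dense(num_classes, activation="softmax"),
])

# 'adam' uses the default learning rate of 0.001, consistent with the setup described earlier.
cnn.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
cnn.summary()
```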

3.7 RNN-LSTM

A kind of RNN architecture called LSTM solves the vanishing gradient issue and makes it possible to describe long-term relationships in sequential data more accurately. The core idea of LSTM is the incorporation of a memory cell, which allows the network to learn and store information over extended periods of time. The memory cell updates and retrieves data selectively, operating as an information highway that traverses time steps. Because of this, LSTM is able to capture long-term relationships by reducing the vanishing gradient issue that standard RNNs frequently face.

The LSTM architecture comprises several key components and their interactions shown in Figure 7. These components and their corresponding equations are outlined in Eq. (4) to Eq. (9):

$f_t=\sigma\left(W_f *\left[h_{t-1}, x_t\right]+b_f\right)$             (4)

$i_t=\sigma\left(W_i *\left[h_{t-1}, x_t\right]+b_i\right)$             (5)

$\tilde{C}_t=\tanh \left(W_c *\left[h_{t-1}, x_t\right]+b_c\right)$             (6)

$C_t=f_t * C_{t-1}+i_t * \tilde{C}_t$             (7)

$O_t=\sigma\left(W_o *\left[h_{t-1}, x_t\right]+b_o\right)$             (8)

$h_t=O_t * \tanh \left(C_t\right)$             (9)

where, $O_t$ is the output gate, $h_t$ is the hidden state, $i_t$ is the input gate, $x_t$ is the input, $\tilde{C}_t$ is the candidate cell state, $C_t$ is the cell state, and $f_t$ is the forget gate. All of these are at time step t.

The input gate, forget gate, cell state update, and output gate are parameterized by the weight matrices $W_i$, $W_f$, $W_c$, and $W_o$ and the bias terms $b_i$, $b_f$, $b_c$, and $b_o$, respectively. tanh represents the hyperbolic tangent activation function, while σ represents the sigmoid activation function.
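The snippet below implements a single LSTM cell step following Eqs. (4) to (9) directly in NumPy; the dimensions and randomly initialized weights are placeholders chosen only to make the example runnable.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, C_prev, W, b):
    """One LSTM cell step; W and b hold matrices/vectors for the f, i, c, o gates."""
    z = np.concatenate([h_prev, x_t])           # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])           # Eq. (4): forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])           # Eq. (5): input gate
    C_hat = np.tanh(W["c"] @ z + b["c"])         # Eq. (6): candidate cell state
    C_t = f_t * C_prev + i_t * C_hat             # Eq. (7): cell state update
    o_t = sigmoid(W["o"] @ z + b["o"])           # Eq. (8): output gate
    h_t = o_t * np.tanh(C_t)                     # Eq. (9): hidden state
    return h_t, C_t

# Tiny worked example: 4 hidden units, 3 input features, random placeholder weights.
rng = np.random.default_rng(0)
hidden, feat = 4, 3
W = {k: rng.standard_normal((hidden, hidden + feat)) for k in "fico"}
b = {k: np.zeros(hidden) for k in "fico"}
h, C = np.zeros(hidden), np.zeros(hidden)
h, C = lstm_step(rng.standard_normal(feat), h, C, W, b)
print(h)
```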

The model's input layer begins with raw EEG data collected and stored in a CSV file, which serves as the input. In the data preprocessing stage, this raw data is reshaped to be compatible with the LSTM layers.

The LSTM layers consist of two key components. The reshaped input data is processed by LSTM Layer 1, which has 56 units. To avoid overfitting, Dropout Layer 1, which has a dropout rate of 0.03, comes next. Next, LSTM Layer 2 with 50 units further processes the data, followed by Dropout Layer 2, also with a 0.03 dropout rate.

A dense layer of 20 units with the 'tanh' activation function helps prepare the data for the final classification. The output layer, which provides the model's final output, classifies the classes using softmax activation.

In the Output stage, the model produces the seizure detection results, indicating whether a seizure is detected or not.

Figure 7. RNN-LSTM model architecture for epileptic seizure detection
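A minimal Keras sketch of this RNN-LSTM stack is given below; the input shape (2560 time steps with one feature per step) and the training settings are illustrative assumptions rather than the exact configuration used in the experiments.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.optimizers import Adam

num_classes = 2  # seizure vs. non-seizure

lstm_model = Sequential([
    LSTM(56, return_sequences=True, input_shape=(2560, 1)),  # assumed input shape
    Dropout(0.03),
    LSTM(50),
    Dropout(0.03),
    Dense(20, activation="tanh"),
    Dense(num_classes, activation="softmax"),
])

lstm_model.compile(optimizer=Adam(learning_rate=0.001),
                   loss="categorical_crossentropy", metrics=["accuracy"])
lstm_model.summary()
```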

4. Results

In this research, we implemented three different models on the same CHB-MIT dataset, and the three models give different results.

Confusion matrices, which show the true positive, true negative, false positive, and false negative counts, are used to graphically illustrate the classification results and assess each model's performance.

The confusion matrix of SVM, CNN, LSTM modules are shown in Figures 8-10, respectively.

SVM Model: Accuracy=87.5%

CNN Model: Validation dataset accuracy=95.89%; Training dataset accuracy=90.91%

RNN-LSTM Model: Validation dataset accuracy=97.85%; Training dataset accuracy=99.9%

Figure 8. Confusion matrix for SVM

Figure 9. Confusion matrix for CNN

Figure 10. Confusion matrix for RNN-LSTM

Among the three methods, the LSTM model gives the highest accuracy and the most effective results.

Some of the other parameters used to measure the performance of the models are given in Table 1 below.

  • Precision: Indicates the proportion of true positive detections among all positive detections, i.e., how many of the detected seizure episodes were genuine seizures.

  • Recall (Sensitivity): Shows how well the model detects genuine seizure occurrences by expressing the percentage of actual seizure events that it accurately identified.

  • F1-Score: It is a balanced indicator of the model's performance in identifying seizures that takes into account both false positives and false negatives. It is calculated as the harmonic mean of precision and recall.

  • Specificity: Shows the ability to prevent false alarms in non-seizure instances by reflecting the percentage of real non-seizure occurrences that the model properly detected.

Based on these metrics, LSTM-RNN performs better than SVM and CNN for accuracy, recall, and F1-score, indicating that it is a more useful model for epileptic seizure identification on the provided dataset.
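For reference, the sketch below shows how these metrics can be derived from a binary confusion matrix with scikit-learn; the labels and predictions are placeholder values, not the paper's results.

```python
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

# Placeholder ground-truth labels and predictions (1 = seizure, 0 = non-seizure).
y_true = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1, 0, 1]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = precision_score(y_true, y_pred)     # TP / (TP + FP)
recall = recall_score(y_true, y_pred)           # TP / (TP + FN), i.e., sensitivity
f1 = f1_score(y_true, y_pred)                   # harmonic mean of precision and recall
specificity = tn / (tn + fp)                    # TN / (TN + FP)
print(precision, recall, f1, specificity)
```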

The SVM model was evaluated with 30-fold cross-validation to assess generalization; accuracy and loss scores across folds are plotted in Figure 11 to visualize its performance. From the graph we observed that accuracy generally increases as the model trains on more data and loss generally decreases as the model learns. Some variability in performance across folds is expected due to random sampling. Cross-validation provides a robust estimate of model performance.

Table 1. Performance parameters

Model       Precision   F1-Score   Recall   Specificity
SVM         0.54        0.55       0.56     0.92
CNN         0.73        0.74       0.75     0.98
LSTM-RNN    0.98        0.86       0.88     0.99

Figure 11. Cross validation accuracy and loss for SVM

Training and validation loss and accuracy graphs of CNN and RNN-LSTM over epochs (CNN over 200 epochs and RNN over 700 epochs) are shown in Figure 12 and Figure 13, respectively. They show how loss and accuracy change during training and validation. We observed that accuracy generally increases as the model trains on more data and loss generally decreases as the model learns.

The performance of a classification model is represented graphically by the ROC (Receiver Operating Characteristic) curve. At different threshold values, it plots the true positive rate (sensitivity) versus the false positive rate (1-specificity). The ROC curve demonstrates the model's capacity to discriminate between seizure and non-seizure events and shows the trade-off between sensitivity and specificity.

Figure 12. Training loss and accuracy graph for (a) CNN (b) RNN

Figure 13. Validation loss and accuracy graph for (a) CNN (b) RNN

The total capacity of the model to distinguish between classes is measured by the AUC (Area Under the ROC Curve). The range of an AUC value is 0 to 1, where 1 denotes perfect discrimination and 0.5 denotes no better performance than arbitrary guesswork. An extensive assessment of the model's performance across various threshold settings may be obtained by looking at the AUC value, which indicates how well the model performs in differentiating between seizure and non-seizure events.
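A sketch of how the ROC curve and AUC can be computed on the test set with scikit-learn is shown below; the labels and predicted seizure probabilities are synthetic placeholders standing in for a trained model's output.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

# Toy labels and predicted seizure probabilities standing in for a model's test-set output.
rng = np.random.default_rng(0)
y_test = rng.integers(0, 2, size=200)
y_score = np.clip(y_test * 0.6 + rng.normal(0.2, 0.25, size=200), 0, 1)

fpr, tpr, _ = roc_curve(y_test, y_score)
auc_value = roc_auc_score(y_test, y_score)

plt.plot(fpr, tpr, label="AUC = %.3f" % auc_value)
plt.plot([0, 1], [0, 1], linestyle="--")             # chance level
plt.xlabel("False positive rate (1 - specificity)")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```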

Figure 14. ROC Curves for (a) SVM, (b) CNN, (c) RNN-LSTM

Figure 14 shows the ROC curves of each model. The ROC curves and AUC values were calculated on the test set. The AUC, which is highest for the RNN-LSTM model, shows that the RNN-LSTM model performs best.

5. Conclusion

In this work, we examined the effectiveness of SVM, CNN, and RNN-LSTM models for EEG-based seizure detection. We conducted a comparative analysis of these models using a carefully curated and preprocessed EEG dataset. While the results from each model varied, the LSTM model showed the highest accuracy and efficacy.

The SVM model demonstrated an accuracy of 87.5% in classifying seizures based on EEG signals. The LSTM achieved a validation accuracy of 97.85% and a training accuracy of 99.9%. With a validation accuracy of 95.89% and a training accuracy of 90.91%, the CNN model also demonstrated strong performance. This suggests that the RNN-LSTM model is particularly well-suited for capturing temporal dependencies in sequential data, such as EEG signals. It can effectively learn long-term patterns and relationships between different time steps, which is crucial for detecting subtle changes in EEG patterns that may precede or accompany a seizure. In contrast, CNNs are better at capturing spatial patterns, and SVMs are more suitable for simpler classification tasks.

Future research can focus on further optimizing and fine-tuning the LSTM model to improve its performance. Furthermore, investigating ensemble approaches or hybrid models that integrate the advantages of many algorithms may improve the overall precision and robustness of seizure detection systems.

6. Future Scope

This work can be extended in several directions; the main limitations and potential improvements are discussed below.

  • Dataset Limitations: The CHB-MIT dataset, while widely used, may not fully capture the diversity of seizure types and patient populations.

  • Model Complexity: Deep learning models like RNN-LSTM can be computationally expensive to train and deploy, especially for real-time applications.

  • Feature Engineering: The performance of the models may be influenced by the choice of features and preprocessing techniques.

  • Multimodal Data: One promising direction is the integration of multimodal data, combining EEG with other physiological signals such as ECG or EMG, to provide a more comprehensive understanding of seizure activity.

These limitations can be addressed in future work by exploring larger and more diverse datasets, developing more efficient model architectures, and investigating advanced feature engineering techniques.

These approaches could lead to more robust and generalizable models, advancing the field of EEG-based seizure detection.

  References

[1] Nandy, A., Alahe, M.A., Uddin, S.N., Alam, S., Nahid, A.A., Awal, M.A. (2019). Feature extraction and classification of EEG signals for seizure detection. In 2019 International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), pp. 480-485. https://doi.org/10.1109/ICREST.2019.8644337

[2] Andrzejak, R.G., Lehnertz, K., Mormann, F., Rieke, C., David, P., Elger, C.E. (2001). Indications of nonlinear deterministic and finite-dimensional structures in time series of brain electrical activity: Dependence on recording region and brain state. Physical Review E, 64(6): 061907. https://doi.org/10.1103/PhysRevE.64.061907

[3] Subasi, A., Kevric, J., Abdullah Canbaz, M. (2019). Epileptic seizure detection using hybrid machine learning methods. Neural Computing and Applications, 31: 317-325. https://doi.org/10.1007/s00521-017-3003-y

[4] Li, M., Jiang, Y., Zhang, Y., Zhu, H. (2023). Medical image analysis using deep learning algorithms. Frontiers in Public Health, 11: 1273253. https://doi.org/10.3389/fpubh.2023.1273253

[5] Ravì, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., Yang, G.Z. (2016). Deep learning for health informatics. IEEE Journal of Biomedical and Health Informatics, 21(1): 4-21. https://doi.org/10.1109/JBHI.2016.2636665

[6] Agarwal, R., Choudhury, T., Ahuja, N.J., Sarkar, T. (2023). IndianFoodNet: Detecting Indian food items using deep learning. International Journal of Computational Methods and Experimental Measurements, 11(4): 221-232. https://doi.org/10.18280/ijcmem.110403

[7] Zhou, M., Tian, C., Cao, R., Wang, B., Niu, Y., Hu, T., Guo, H., Xiang, J. (2018). Epileptic seizure detection based on EEG signals and CNN. Frontiers in Neuroinformatics, 12: 95. https://doi.org/10.3389/fninf.2018.00095

[8] Khan, I.U., Ullah, M., Tripathi, S., Sahu, M., Zeb, A., Faiza,  Kumar, A. (2024). Machine learning for Markov modeling of COVID-19 dynamics concerning air quality index, PM-2.5, NO2, PM-10, and O3. International Journal of Computational Methods and Experimental Measurements, 12(2): 121-134. https://doi.org/10.18280/ijcmem.120202

[9] Hussein, R., Palangi, H., Ward, R., Wang, Z.J. (2018). Epileptic seizure detection: A deep learning approach. arXiv Preprint arXiv: 1803.09848. https://doi.org/10.48550/arXiv.1803.09848

[10] Mandal, S., Singh, B.K., Thakur, K. (2022). Epileptic seizure detection using deep learning based long short-term memory networks and time-frequency analysis: A comparative investigation in machine learning paradigm. Brazilian Archives of Biology and Technology, 65: e22210559. https://doi.org/10.1590/1678-4324-2022210559

[11] Wu, X., Yang, Z., Zhang, T., Zhang, L., Qiao, L. (2023). An end-to-end seizure prediction approach using long short-term memory network. Frontiers in Human Neuroscience, 17: 1187794. https://doi.org/10.3389/fnhum.2023.1187794

[12] Raeisi, K., Khazaei, M., Croce, P., Tamburro, G., Comani, S., Zappasodi, F. (2022). A graph convolutional neural network for the automated detection of seizures in the neonatal EEG. Computer Methods and Programs in Biomedicine, 222: 106950. https://doi.org/10.1016/j.cmpb.2022.106950

[13] Huang, C., Chen, W., Cao, G. (2019). Automatic epileptic seizure detection via attention-based CNN-BiRNN. In 2019 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), San Diego, CA, USA, pp. 660-663. https://doi.org/10.1109/BIBM47256.2019.8983420

[14] Wei, X., Zhou, L., Chen, Z., Zhang, L., Zhou, Y. (2018). Automatic seizure detection using three-dimensional CNN based on multi-channel EEG. BMC Medical Informatics and Decision Making, 18: 71-80. https://doi.org/10.1186/s12911-018-0693-8

[15] Truong, N.D., Zhou, L., Kavehei, O. (2019). Semi-supervised seizure prediction with generative adversarial networks. In 2019 41st Annual International Conference of The IEEE Engineering in Medicine and Biology Society (EMBC), Berlin, Germany, pp. 2369-2372. https://doi.org/10.1109/EMBC.2019.8857755

[16] Basri, A., Arif, M. (2021). Classification of seizure types using random forest classifier. Advances in Science and Technology. Research Journal, 15(3). http://dx.doi.org/10.12913/22998624/140542

[17] Usman, S.M., Khalid, S., Bashir, S. (2021). A deep learning based ensemble learning method for epileptic seizure prediction. Computers in Biology and Medicine, 136: 104710. https://doi.org/10.1016/j.compbiomed.2021.104710

[18] Jaiswal, A.K., Banka, H. (2017). Epileptic seizure detection in EEG signal with GModPCA and support vector machine. Bio-Medical Materials and Engineering, 28(2): 141-157. https://doi.org/10.3233/BME-171663

[19] Ashokkumar, S.R., Premkumar, M., Dhilipkumar, P., Manikandan, P., Naveen, P., Saravanan, M. (2022). Application of multi-domain feature for automated seizure detection from EEG signal. In 2022 3rd International Conference on Smart Electronics and Communication (ICOSEC), Trichy, India, pp. 280-285. https://doi.org/10.1109/ICOSEC54921.2022.9952003

[20] Selvathi, D., Meera, V.K. (2017). Realization of epileptic seizure detection in EEG signal using wavelet transform and SVM classifier. In 2017 International Conference on Signal Processing and Communication (ICSPC), Coimbatore, India, pp. 18-22. https://doi.org/10.1109/CSPC.2017.8305848

[21] Omidvar, M., Zahedi, A., Bakhshi, H. (2021). EEG signal processing for epilepsy seizure detection using 5-level Db4 discrete wavelet transform, GA-based feature selection and ANN/SVM classifiers. Journal of Ambient Intelligence and Humanized Computing, 12(11): 10395-10403. https://doi.org/10.1007/s12652-020-02837-8

[22] Wang, L., Xue, W., Li, Y., Luo, M., Huang, J., Cui, W., Huang, C. (2017). Automatic epileptic seizure detection in EEG signals using multi-domain feature extraction and nonlinear analysis. Entropy, 19(6): 222. https://doi.org/10.3390/e19060222

[23] Truong, N.D., Nguyen, A.D., Kuhlmann, L., Bonyadi, M.R., Yang, J., Ippolito, S., Kavehei, O. (2018). Convolutional neural networks for seizure prediction using intracranial and scalp electroencephalogram. Neural Networks, 105: 104-111. https://doi.org/10.1016/j.neunet.2018.04.018

[24] Tsiouris, Κ.Μ., Pezoulas, V.C., Zervakis, M., Konitsiotis, S., Koutsouris, D.D., Fotiadis, D.I. (2018). A long short-term memory deep learning network for the prediction of epileptic seizures using EEG signals. Computers in Biology and Medicine, 99: 24-37. https://doi.org/10.1016/j.compbiomed.2018.05.019

[25] Bose, S., Rama, V., Warangal, N., Rao, C.R. (2017). EEG signal analysis for Seizure detection using discrete wavelet transform and random forest. In 2017 International Conference on Computer and Applications (ICCA), Doha, pp. 369-378. https://doi.org/10.1109/COMAPP.2017.8079760

[26] Ma, Y., Huang, Z., Su, J., Shi, H., Wang, D., Jia, S., Li, W. (2023). A multi-channel feature fusion CNN-BI-LSTM epilepsy EEG classification and prediction model based on attention mechanism. IEEE Access, 11: 62855-62864. https://doi.org/10.1109/ACCESS.2023.3287927

[27] Maia, P., Lopes, E., Hartl, E., Vollmar, C., Noachtar, S., Cunha, J.P.S. (2020). Multimodal approach for epileptic seizure detection in epilepsy monitoring units. In XV Mediterranean Conference on Medical and Biological Engineering and Computing-MEDICON 2019: Proceedings of MEDICON 2019, Coimbra, Portugal. Springer International Publishing. Springer, Cham, pp. 1093-1104. https://doi.org/10.1007/978-3-030-31635-8_133

[28] Goldberger, A.L., Amaral, L.A., Glass, L., Hausdorff, J.M., Ivanov, P.C., Mark, R.G., Mietus, J.E., Moody, G.B., Peng, C.K., Stanley, H.E. (2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation, 101(23): e215-e220. https://doi.org/10.1161/01.CIR.101.23.e215