The primary goal of this research is to detect and classify Alzheimer's disease (AD) using machine learning algorithms. The proposed method applies preprocessing, feature extraction, and classification techniques to distinguish between the phases of AD, and the study demonstrates how the classifiers behave when recognizing and classifying these phases. The classifiers' input consists of the primary features of different frequency bands, and machine learning classifiers are used to assess recognition accuracy. After band filtering and feature extraction, a novel model is developed that extracts several bands from the EEG signals and employs K-nearest neighbors (KNN), support vector machine (SVM), and multi-layer perceptron (MLP) algorithms to test classification performance. The constructed machine learning classifiers use five wavelet band features to classify the stages of the disease; these features are computed with wavelet-related knowledge through the discrete wavelet transform (DWT), principal component analysis (PCA), and independent component analysis (ICA). The proposed machine learning models are tested on EEG signals, where the SVM shows an average accuracy of 95% for test data classification.
Keywords: EEG, PCA, ICA, DWT, MLP
It is already possible to determine the features of brain functioning disorders using EEG analysis [1-5], although it remains unclear how these features link to AD. Compared with other imaging modalities, EEG offers non-invasive, affordable, and temporally precise data on the brain's electrical activity during neurotransmission. The EEG processing and analysis in the suggested framework were done using a DWT [1] to break down the EEG signal into its frequency sub-bands and extract a collection of statistical features reflecting the distribution of wavelet coefficients. ICA [6, 7] and PCA [8-12] are two methods for reducing the dimensions of data.
These features were then fed into a support vector machine and a multilayer perceptron, which yield only two possible outcomes: AD or Normal Control (NC). The classification performance of the different approaches is presented and compared to show which procedure is superior. These findings illustrate how data from individual petit mal epileptic patients can be used to train and test an Alzheimer's detection prediction algorithm. Given the variety of epilepsy, technologies of this kind will probably be required to customize intelligent treatment devices to each individual's neurophysiology before they enter clinical use.
The authors of [13-15] applied several machine learning approaches, such as SVM and KNN, to increase the accuracy of early-stage AD identification from EEG, employing Hjorth parameters. The authors of [16] investigated the adaptive flexible analytic wavelet transform, which adapts to EEG variations automatically. The use of spectral, empirical wavelet transform, and wavelet-based features for early AD identification from EEG signals with artificial neural networks was investigated in [17-21], where an average accuracy of 90% was attained. To predict AD, the researchers of [22-25] examined six supervised machine learning methods, and the suggested model had an average accuracy of 85%.
The paper is organized as follows: Section 2 presents the proposed model, including the dataset details, preprocessing steps, feature extraction methods, and classification models; Section 3 discusses the results; Section 4 concludes the work; and the final section lists the references.
The proposed methodology for EEG-based AD detection is shown in Figure 1. First, the EEG brain signals are processed by separating them into multiple frequency bands, compressing them, and minimizing noise. Next, latent components are extracted, and AD is identified from the input EEG data using the feature extraction method. These features are then given to the SVM, KNN, and MLP classifiers to perform classification; the results from KNN are better than those from SVM and MLP. Finally, the model's performance is assessed in terms of accuracy, sensitivity, and specificity.
Figure 1. Proposed model for AD prediction
The proposed algorithm for predicting the stages of AD is given in Algorithm 1. First, the brain EEG signals are preprocessed to remove artifacts; the ICA technique is then applied to extract the latent components from the data. The model is trained on the extracted features and finally evaluated on the test dataset.
Algorithm 1: AD disease classification using EEG signal

Input: EEG signals
Output: Class label (AD or NC) for each input signal
Start
Step 1: Pre-process the EEG data.
    • Sort the input signals.
    • Acquire the EEG signal's sub-bands: Delta, Theta, Gamma, and Beta.
Step 2: Label the dataset.
Step 3: Separate the test and train datasets.
Step 4: Use DWT, ICA, and PCA to extract the features.
Step 5: Use the extracted features to train the MLP and SVM classifier models.
Step 6: Feed in the test signals and record the outcome.
Finish
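As a minimal end-to-end sketch of Algorithm 1 in MATLAB: the dataset file name and the helper functions preprocessEEG and extractFeatures are hypothetical stand-ins for the routines described in Sections 2.2 and 2.3, and the labels are assumed numeric (0 = NC, 1 = AD).

% Sketch of the Algorithm 1 pipeline; file and helper names are placeholders.
load('eeg_dataset.mat', 'signals', 'labels');   % hypothetical dataset file
filtered = preprocessEEG(signals);              % bandpass filtering (Section 2.2)
features = extractFeatures(filtered);           % DWT + PCA + ICA features (Section 2.3)

cv = cvpartition(labels, 'HoldOut', 0.2);       % train/test split
Xtrain = features(training(cv), :);  ytrain = labels(training(cv));
Xtest  = features(test(cv), :);      ytest  = labels(test(cv));

knnModel = fitcknn(Xtrain, ytrain);             % one of the Section 2.4 classifiers
testAcc  = mean(predict(knnModel, Xtest) == ytest);   % held-out accuracy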
2.1 Dataset
The EEG recordings from the neurology department of Baskent University Hospital were used in this work [4]. The demographic information of the individuals is given in Table 1. The dataset consists of 50 EEG signals covering the AD and CN groups (25 subjects each). The mean age of the AD individuals is 75, and that of the CN individuals is 69.18. Among the AD individuals there are 15 males and 10 females [11].
Table 1. Demographic information
| Demographic Characteristics | AD | CN |
|---|---|---|
| Age (mean) | 75 | 69.18 |
| Gender (m:f) | 15:10 | 18:7 |
| Count | 25 | 25 |
Figure 2. Channel electrode montage
Nineteen electrodes were used to record the EEG waves, positioned in accordance with the international 10-20 system. Thirteen channels were referenced using a longitudinal bipolar montage, while a further fifteen channels used a monopolar montage. The channel electrode montage is shown in Figure 2 [5].
2.2 Dataset preprocessing
EEG signals are frequently contaminated by various artifacts during recording; the most common are cardiac, ocular, muscular, and motion artifacts. Therefore, the first step in our suggested method is to apply a bandpass filter to remove this noise. Frequencies higher than 60 Hz are usually classified as noise and removed. A bandpass filter and an IIR filter are employed in preprocessing to lower the noise. Filters are networks that handle signals in a frequency-dependent manner. A systematic unwanted change in a signal is commonly referred to as signal distortion, although it is also known by other names such as signal fading, reverberation, echo, multipath reflections, and missing samples. A bandpass filter is an electrical circuit or component that allows signals between two designated frequencies to pass through while rejecting signals at other frequencies. The design starts from the poles of the prototype analog lowpass filter: for a Butterworth filter of order N with Ωc = 1 rad/s, the poles are
$p'_{a_k} = -\sin(\theta) + j\cos(\theta)$ (1)
where,
$\theta = \frac{(2k-1)\pi}{2N}, \quad k = 1, \ldots, N$ (2)
Here a prime superscript is placed on $p$ to differentiate the lowpass prototype poles from the as-yet-uncalculated bandpass poles. Next, determine the lower and upper −3 dB frequencies of the digital bandpass filter, as well as the corresponding frequencies of the analog bandpass filter. We define $f_{center}$ as the center frequency in Hz and $BW_{Hz}$ as the −3 dB bandwidth in Hz. At −3 dB, the discrete frequencies are as follows:
$f_1 = f_{center} - BW_{Hz}/2$ (3)

$f_2 = f_{center} + BW_{Hz}/2$ (4)
As before, we'll pre-warp the analog frequencies to take the nonlinearity of the bilinear transform into consideration:
$F_1 = \frac{f_s}{\pi}\tan\left(\frac{\pi f_1}{f_s}\right)$ (5)

$F_2 = \frac{f_s}{\pi}\tan\left(\frac{\pi f_2}{f_s}\right)$ (6)
Two further quantities need to be defined: $BW_{Hz} = F_2 - F_1$ is the pre-warped −3 dB bandwidth in Hz, and $F_0 = \sqrt{F_1 F_2}$ is the geometric mean of $F_1$ and $F_2$.
Analog bandpass poles are created by converting the analog lowpass poles; each lowpass pole $p'_a$ yields two bandpass poles:
$p_a = 2\pi F_0\left[\frac{BW_{Hz}}{2F_0}p'_a \pm j\sqrt{1-\left(\frac{BW_{Hz}}{2F_0}p'_a\right)^2}\,\right]$ (7)
Utilize the bilinear transform to map the poles from the s-plane to the z-plane. The only difference from the IIR lowpass case is that there are 2N poles rather than N:
$p_k = \frac{1 + p_{a_k}/(2f_s)}{1 - p_{a_k}/(2f_s)}, \quad k = 1, \ldots, 2N$ (8)
N zeros at z=-1 and N zeros at z=+1 must be added. Now we can write H(z) as follows:
$H(z) = K\,\frac{(z+1)^N (z-1)^N}{(z-p_1)(z-p_2)\cdots(z-p_{2N})}$ (9)
The N zeros at -1 and N zeros at +1 are represented as a vector in bp_synth:
q= [-ones(1,N) ones(1,N)] (10)
The poles and zeros can be expanded to generate polynomials with coefficients $a_n$ and $b_n$. Multiplying the numerator and denominator of Eq. (9) by $z^{-2N}$ and expanding gives polynomials in $z^{-n}$:
$H(z) = K\,\frac{b_0 + b_1 z^{-1} + \cdots + b_{2N} z^{-2N}}{1 + a_1 z^{-1} + \cdots + a_{2N} z^{-2N}}$ (11)
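The pole calculations in Eqs. (1)-(9) can be expressed compactly in MATLAB. The sketch below uses illustrative parameter values (an order-3 filter around 10 Hz at a 250 Hz sampling rate), not values taken from the paper, and omits the gain normalization K:

% Sketch of Eqs. (1)-(9); parameter values are illustrative only.
N = 3; fs = 250; fcenter = 10; bwHz = 4;         % order, sample rate, band (Hz)

k = 1:N;                                         % Eqs. (1)-(2): lowpass prototype poles
theta = (2*k - 1)*pi/(2*N);
p_lp = -sin(theta) + 1j*cos(theta);

f1 = fcenter - bwHz/2;  f2 = fcenter + bwHz/2;   % Eqs. (3)-(4): -3 dB edges
F1 = fs/pi * tan(pi*f1/fs);                      % Eqs. (5)-(6): pre-warped frequencies
F2 = fs/pi * tan(pi*f2/fs);
BW = F2 - F1;  F0 = sqrt(F1*F2);

alpha = BW/(2*F0) * p_lp;                        % Eq. (7): lowpass-to-bandpass poles
pa = 2*pi*F0 * [alpha + 1j*sqrt(1 - alpha.^2), alpha - 1j*sqrt(1 - alpha.^2)];

p = (1 + pa/(2*fs)) ./ (1 - pa/(2*fs));          % Eq. (8): bilinear transform

b = poly([-ones(1,N), ones(1,N)]);               % Eqs. (9)-(10): N zeros at z = -1 and +1
a = real(poly(p));                               % denominator coefficients (gain K omitted)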
Algorithm 2 describes a bandpass filter that uses a Blackman window. First, the parameters are initialized: the sampling frequency Fs, the order N, and the first and second cutoff frequencies Fc1 and Fc2. In the second step, an (N+1)-point window vector is constructed using the Blackman method. The fir1 function is then called to obtain the coefficients. Lastly, dfilt.dffir() wraps the coefficients in a discrete-time, direct-form finite impulse response (FIR) filter object.
Algorithm 2: Filter input noise signals

Input: EEG signal
Output: Filtered signal
Step 1: Initialization.
    Fs = 5250000;    % Sampling frequency
    N = 3500;        % Order
    Fc1 = 59500;     % First cutoff frequency
    Fc2 = 60500;     % Second cutoff frequency
    flag = 'scale';  % Sampling flag
Step 2: Create the window vector for the design algorithm.
    win = blackman(N+1);
Step 3: Calculate the coefficients using the fir1 function.
    b = fir1(N, [Fc1 Fc2]/(Fs/2), 'bandpass', win, flag);
Step 4: Wrap the coefficients in a discrete-time, direct-form FIR filter object.
    Hd = dfilt.dffir(b);
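Once Hd has been built, filtering a raw EEG trace is a single call; the sketch below assumes x is a vector of noisy samples at the stated sampling frequency:

y = filter(Hd, x);     % apply the FIR bandpass filter designed in Algorithm 2
% fvtool(Hd);          % optionally inspect the filter's magnitude response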
Figure 3. Input noise signal
Figure 4. Filtered signal
Figures 3 and 4 display the bandpass filter's output and the input noise signal, respectively.
2.3 Feature extraction
Depending on the frequency range, the EEG is divided into five sub-band signals before the features are extracted; these are referred to as gamma, beta, alpha, theta, and delta. To extract features from a signal at various scales, the DWT is applied through repeated highpass and lowpass filtering; here, the DWT divides the EEG data into its frequency sub-bands. EEG research identifies five main frequency bands, and there is a correlation between behavior and neuronal activity in specific regions of the brain. The most frequently utilized frequency ranges are delta (0.5-4 Hz), theta (4-8 Hz), alpha (8-14 Hz), beta (14-30 Hz), and gamma (30-63 Hz). The wavelet coefficients consist of the approximation and detail coefficients in order. In applications that call for bilateral transformations, it is desirable to choose a transform that yields the fewest coefficients necessary to reconstruct the original signal precisely; the DWT achieves this parsimony by restricting translation and scale changes to powers of two. Most DWT-based signal and image processing applications are best explained in terms of filter banks: using a sequence of filters, sub-band coding divides a signal into its constituent spectral regions. A minimal sketch of this decomposition is given below, and Figures 5-14 display the five decomposed bands of the normal and AD signals.
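A minimal sketch of this sub-band decomposition, assuming the Wavelet Toolbox and an illustrative 128 Hz sampling rate (so that the detail levels align with the clinical bands), is:

% Four-level DWT of an EEG vector x (assumed sampled at 128 Hz).
[C, L] = wavedec(x, 4, 'db8');        % Daubechies-8 decomposition
gamma = wrcoef('d', C, L, 'db8', 1);  % D1: 32-64 Hz detail band
beta  = wrcoef('d', C, L, 'db8', 2);  % D2: 16-32 Hz detail band
alpha = wrcoef('d', C, L, 'db8', 3);  % D3: 8-16 Hz detail band
theta = wrcoef('d', C, L, 'db8', 4);  % D4: 4-8 Hz detail band
delta = wrcoef('a', C, L, 'db8', 4);  % A4: 0-4 Hz approximation band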
Figure 5. NC GAMMA

Figure 6. AD GAMMA
Figure 7. NC BETA
Figure 8. AD BETA
Figure 9. NC ALPHA
Figure 10. AD ALPHA
Figure 11. NC THETA
Figure 12. AD THETA
Figure 13. NC DELTA
Figure 14. AD DELTA
Raw EEG data shows low-frequency (delta) activity, whereas high-frequency (gamma) activity is essentially noise-like and of small amplitude. The intensity data of the various bands demonstrate that, in AD patients compared with NC, the gamma, alpha, and beta band intensities decrease while the delta and theta band intensities increase. PCA is a tried-and-true method for dimensionality reduction and feature extraction. Our objective is to use PCA to represent the d-dimensional data in a lower-dimensional space that reflects the variance in terms of the sum-squared error. As with conventional clustering techniques, it is very helpful to know the number of independent components in advance. The principal components method is theoretically well founded: it reduces the number of variables in a data set while keeping as much of the variance as is practical. Its main computational steps are listed below:
data = data - repmat(mean(data, 2), 1, size(data, 2));  % center each row at zero mean (added for completeness)
covariance = 1 / (N-1) * (data * data');                % sample covariance matrix
[PC, V] = eig(covariance);                              % eigenvectors = principal components
[~, idx] = sort(diag(V), 'descend'); PC = PC(:, idx);   % order components by explained variance
signals = PC' * data;                                   % project the data onto the components
The PCA outputs are shown in Figure 15.
Figure 15. Features extracted by PCA
Figure 16. Features extracted from input signal by using ICA
Figure 17. Approximation and detail coefficients of the original signal obtained by the DWT
Figure 18. Combined test features
ICA is a feature extraction method that transforms a multivariate random signal into a signal whose components are mutually independent, allowing the individual components to be extracted from the mixed signals. Independence here means that no component carries information that can be inferred from another; in statistical terms, the joint probability of the independent quantities equals the product of the probabilities of each independent quantity. The ICA findings are shown in Figure 16.
After PCA and ICA have been computed, the approximation and detail coefficient vectors are located using the DWT. This is accomplished by applying the function dwt to the original signal with the chosen wavelet: [CA1, CD1] = dwt(Origin_Sig, 'db8'); computes the single-level DWT of the signal Origin_Sig using the Daubechies-8 ('db8') wavelet, returning the approximation coefficients vector CA1 and the detail coefficients vector CD1. Figure 17 shows the approximation and detail coefficients of the original signal.
The test features are ultimately formed by combining the PCA features, the ICA independent components, and the DWT detail coefficients, as sketched below. Figure 18 shows the combined test features.
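The combination step itself can be a simple concatenation; in this sketch, pcaFeat, icaFeat, and CD1 are assumed to hold the PCA features, ICA components, and DWT detail coefficients from the preceding steps:

% Stack the three feature sets into a single test feature vector.
testFeatures = [pcaFeat(:).', icaFeat(:).', CD1(:).'];   % forced to row orientation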
In this study, fastICA was used. Created by Aapo Hyvärinen at Helsinki University of Technology, it is a popular and efficient ICA algorithm. Algorithm 3 explains how fastICA operates.
Algorithm 3: fastICA algorithm

Step 1: Initialization.
    W = normRows(rand(r, size(Z,1)));   % random initial weights, rows normalized
    k = 0;
    delta = inf;
Step 2: Iterate until convergence.
    while delta > TOL && k < MAX_ITERS
        k = k + 1;
        % Update weights
        Wlast = W;                      % save last weights
        Sk = permute(W * Zcw, [1, 3, 2]);
        if USE_KURTOSIS
            % Kurtosis is a measure of the tailedness of a distribution,
            % i.e., how often outliers occur.
            G  = 4 * Sk.^3;
            Gp = 12 * Sk.^2;
        else
            % Negentropy
            G  = Sk .* exp(-0.5 * Sk.^2);
            Gp = (1 - Sk.^2) .* exp(-0.5 * Sk.^2);
        end
    end
Step 3: Return the fastICA components.
    % Independent components
    fastICA = mean(unique(Wlast(:)));
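Algorithm 3 assumes its input Zcw has already been centered and whitened; this preprocessing is standard for fastICA, though not spelled out above. A minimal sketch, with Z holding the mixed signals as channels x samples:

% Center and whiten the mixed signals Z before running the fastICA loop.
Zc = bsxfun(@minus, Z, mean(Z, 2));        % zero-mean each channel
[E, D] = eig(cov(Zc.'));                   % eigendecomposition of the covariance
Zcw = diag(1./sqrt(diag(D))) * E' * Zc;    % whitened data: identity covariance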
2.4 Classification model
Since many authors have succeeded in classification experiments using machine learning models such as SVM, KNN, and MLP [9, 10], these models were also chosen here for binary classification. The SVM is a discriminative classifier defined by a separating hyperplane; in two dimensions, the hyperplane divides the plane into two sections, with one class on each side. The support vector classifier has several advantages: standard tools can be used to tune its properties, it finds a single global optimum, and only a little extra work is needed for nonlinear boundaries. Its performance also compares well with other approaches. One disadvantage is that the difficulty of the problem grows with the number of samples rather than with their dimensionality. A brief sketch of this stage is given below.
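A hedged sketch of the SVM stage with MATLAB's fitcsvm, where the RBF kernel and standardization are assumptions (the paper does not state the kernel), and Xtrain/ytrain and Xtest hold the combined features and AD/NC labels:

% Train and apply the SVM on the extracted features.
svmModel = fitcsvm(Xtrain, ytrain, 'KernelFunction', 'rbf', 'Standardize', true);
ypred = predict(svmModel, Xtest);          % predicted AD/NC labels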
Figure 19. MLP layer summary
MLP: Since this is a binary classification task, the MLP consists of an input layer with 50 neurons, two hidden layers with 256 neurons each, and an output layer with 2 neurons. The model classifies the input signals as AD or normal. The neurons in the hidden layers use the LeakyReLU activation function, whereas the neurons in the output layer use the Softmax activation function. The MLP is trained with the Adam optimizer under a cross-entropy loss function. Figure 19 provides an overview of the layers, and an approximate sketch of the network follows below.
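A rough stand-in for this network using patternnet (Neural Network Toolbox) is sketched here; note that patternnet uses tansig hidden units and scaled conjugate gradient training rather than the LeakyReLU/Adam configuration described above, so it only approximates the architecture:

% Two-hidden-layer MLP (256 neurons each) with a softmax output layer.
net = patternnet([256 256]);
net.trainParam.epochs = 40;                % 40 training iterations, as in Section 3
T = full(ind2vec(ytrain(:).' + 1));        % one-hot targets, assuming labels in {0,1}
net = train(net, Xtrain.', T);             % patternnet expects samples in columns
scores = net(Xtest.');                     % class scores per test sample
[~, ypred] = max(scores, [], 1);           % predicted class index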
KNN: One of the simplest and most popular classification algorithms is the KNN method, which assigns a new data point to a class according to its similarity to its closest neighbors; classification is decided by a simple competitive vote among them. The algorithm computes the distances between a given data point and its K nearest data points and chooses the category with the highest frequency among those neighbors, with the Euclidean distance as the usual measure. The final model thus consists only of the annotated data arranged spatially. Numerous fields, including genetics and forecasting, use this technique. In this case, the method outperforms the SVM as the number of features increases, and KNN also performs best in our proposed method; a minimal sketch follows.
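A minimal KNN sketch on the same feature matrices, where K = 5 is an assumed neighborhood size (the paper does not report K):

% KNN with Euclidean distance on the extracted features.
knnModel = fitcknn(Xtrain, ytrain, 'NumNeighbors', 5, 'Distance', 'euclidean');
ypred = predict(knnModel, Xtest);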
The recommended task was completed with MATLAB R2015b. MATLAB is an interactive environment and high-level technical computing language used for constructing algorithms, analyzing and visualizing data, and performing numerical calculations; technical computing problems can often be solved more quickly in MATLAB than in conventional programming languages such as C, C++, and Fortran. This study used EEG values from two distinct classes: AD and normal. The dataset is divided into test and training subsets with a 1:5 ratio, and the training dataset is used to train the SVM, MLP, and KNN models. The MLP was trained for 40 iterations with a learning rate of 0.00050; the network weights are adjusted with the Adam optimizer using the categorical cross-entropy loss function. The parameters considered for this experiment are listed in Table 2.
Table 2. Parameters
| Parameters | MLP Model |
|---|---|
| Optimizer | Adam |
| Activation function | LeakyReLU and Softmax |
| Loss function | Categorical cross-entropy |
| Batch size | 128 |
| Dataset | EEG |
| Epochs | 40 |
| Learning rate | 0.00050 |
| Normalization | Batch normalization |
| Pooling | Max pooling |
The SVM and KNN are also considered for classification. The training set of each model is cross-validated ten times: the training set, consisting of the data and the label set, is randomly partitioned into ten subsets, of which nine are used for training while the remaining one is held out as the test fold. A sketch of this procedure is given below.
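A sketch of this ten-fold procedure with cvpartition, assuming numeric labels and the feature matrices from the previous sections:

% Ten-fold cross-validation over the training set (Xtrain, ytrain assumed).
cv = cvpartition(ytrain, 'KFold', 10);
foldAcc = zeros(cv.NumTestSets, 1);
for i = 1:cv.NumTestSets
    trIdx = training(cv, i);  teIdx = test(cv, i);
    mdl = fitcknn(Xtrain(trIdx, :), ytrain(trIdx));
    foldAcc(i) = mean(predict(mdl, Xtrain(teIdx, :)) == ytrain(teIdx));
end
meanAcc = mean(foldAcc);    % average accuracy across the ten folds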
The effectiveness of the system is evaluated using the confusion matrix; the confusion matrices for KNN, SVM, and MLP are given in Tables 3-5. The KNN correctly predicted every test EEG signal, achieving a 100% accuracy rate with ten true positives and ten true negatives.
Table 3. KNN confusion matrix
| Classes | AD | NC | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|
| AD | 10 | 0 | 100% | 100% | 100% |
| NC | 0 | 10 | 100% | 100% | 100% |
| Average | | | 100% | 100% | 100% |
Table 4. SVM confusion matrix
| Classes | AD | NC | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|
| AD | 7 | 3 | 75% | 70% | 80% |
| NC | 2 | 8 | 75% | 80% | 70% |
| Average | | | 75% | 75% | 75% |
Table 5. MLP confusion matrix
| Classes | AD | NC | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|---|
| AD | 8 | 2 | 85% | 80% | 90% |
| NC | 1 | 9 | 85% | 90% | 80% |
| Average | | | 85% | 85% | 85% |
The SVM performs poorly, with an accuracy of 75%: of the ten AD test signals, 7 are classified as AD and 3 are misclassified as NC, and 2 of the ten NC signals are misclassified as AD, giving 5 misclassifications in total. The derivation of the scalar measures from these counts is illustrated below.
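These scalar measures follow directly from the confusion-matrix counts; as a worked check against the SVM figures in Table 4 (AD taken as the positive class):

% Metrics from the SVM confusion matrix (TP = 7, FN = 3, FP = 2, TN = 8).
TP = 7; FN = 3; FP = 2; TN = 8;
accuracy    = (TP + TN) / (TP + TN + FP + FN);   % 15/20 = 0.75
sensitivity = TP / (TP + FN);                    % 7/10 = 0.70
specificity = TN / (TN + FP);                    % 8/10 = 0.80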
The MLP gives an average accuracy of 85 percent, with 2 AD signals misclassified as NC and 1 NC signal misclassified as AD. The MLP confusion matrix displays the frequency of missed test feature predictions. Averaged over the two classes, the model achieves 85 percent for both sensitivity and specificity.
Figure 20 displays the MLP, SVM, and KNN performance for the proposed method using EEG signals. Based on the comparison, KNN performs best, offering 100% accuracy, sensitivity, and specificity. As the comparison graph shows, the SVM performs the worst, reaching only 75% accuracy, sensitivity, and specificity. The performance of existing models and the proposed model is given in Table 6. Compared with existing models, the proposed model shows better performance, with accuracies of 100%, 75%, and 85% for KNN, SVM, and MLP, respectively.
Figure 20. Performance comparison graph
Table 6. The performance comparison
| References | Features | Classifier | Classes | Subject Numbers | Accuracy |
|---|---|---|---|---|---|
| [6] | Wavelet features | SVM | NC vs. AD | 50, 50 | 96% |
| [7] | Wavelet features | SVM | NC vs. AD | 50, 50 | 92% |
| [8] | Spectral and complex features | KNN | NC vs. AD | 50, 50 | 94% |
| Proposed method | Complex features (PCA, ICA, and DWT) | KNN / SVM / MLP | NC vs. AD | 35, 31 | 100% / 75% / 85% |
Since AD requires a different kind of therapy, a useful diagnostic decision-support tool can benefit the doctors treating suspected cases. Individuals are classified by KNN as either having AD or not. Using statistical characteristics taken from the DWT sub-bands of the EEG data, the accuracy of two feature extraction methods, PCA and ICA, was compared across the SVM, KNN, and MLP classifiers in order to determine how well they captured the observed AD/NC patterns. The comparisons were based on three scalar performance measures obtained from the confusion matrices: accuracy, sensitivity, and specificity. Our research indicates that KNN, in conjunction with nonlinear feature extraction, may one day successfully underpin intelligent diagnosis technologies. For this experiment, an Intel Pentium machine with a 64-bit operating system and 8 GB of RAM, running MATLAB R2015b, was used.
[1] Umbricht, G.F., Tarzia, D.A., Rubio, D. (2022). Determination of two homogeneous materials in a bar with solid-solid interface. Mathematical Modelling of Engineering Problems, 9(3): 568-576. https://doi.org/10.18280/mmep.090302
[2] Al-Qerem, A., Kharbat, F., Nashwan, S., Ashraf, S., Blaou, K. (2020). General model for best feature extraction of EEG using discrete wavelet transform wavelet family and differential evolution. International Journal of Distributed Sensor Networks, 16(3): 1550147720911009. https://doi.org/10.1177/1550147720911009
[3] Buzzell, G.A., Niu, Y., Aviyente, S., Bernat, E. (2022). A practical introduction to EEG Time-Frequency Principal Components Analysis (TF-PCA). Developmental Cognitive Neuroscience, 55: 101114. https://doi.org/10.1016/j.dcn.2022.101114
[4] Placidi, G., Cinque, L., Polsinelli, M. (2021). A fast and scalable framework for automated artifact recognition from EEG signals represented in scalp topographies of independent components. Computers in Biology and Medicine, 132: 104347. https://doi.org/10.1016/j.compbiomed.2021.104347
[5] https://ars.els-cdn.com/content/image/1-s2.0-S1746809420303530-gr1_lrg.jpg.
[6] Nagarathna, C.R., Kusuma, M.M. (2023). Early detection of Alzheimer’s Disease using MRI images and deep learning techniques. Alzheimer's & Dementia, 19(53): e062076. https://doi.org/10.1002/alz.062076
[7] Kulkarni, N.N., Bairagi, V.K. (2017). Extracting salient features for EEG-based diagnosis of Alzheimer's disease using support vector machine classifier. IETE Journal of Research, 63(1): 11-22. https://doi.org/10.1080/03772063.2016.1241164
[8] Rangegowda, N.C., Mohanchandra, K., Preetham, A., Almas, M., Huliyappa, H. (2023). A multi-layer perceptron network-based model for classifying stages of Alzheimer's Disease using clinical data. Revue d'Intelligence Artificielle, 37(3): 601-609. https://doi.org/10.18280/ria.370309
[9] Kulkarni, N. (2018). Use of complexity based features in diagnosis of mild Alzheimer disease using EEG signals. International Journal of Information Technology, 10(1): 59-64. https://doi.org/10.1007/s41870-017-0057-0
[10] Nagarathna, C.R., Mohanchandra, K. (2023). An overview of early detection of Alzheimer's disease. International Journal of Medical Engineering and Informatics, 15(5): 442-457. https://doi.org/10.1504/IJMEI.2023.133091
[11] Nagarathna, C.R., Kusuma, M. (2021). Comparative study of detection and classification of Alzheimer's disease using Hybrid model and CNN. In 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), Bengaluru, India, pp. 43-46. https://doi.org/10.1109/CENTCON52345.2021.9688082
[12] Safi, M.S., Safi, S.M.M. (2021). Early detection of Alzheimer’s disease from EEG signals using Hjorth parameters. Biomedical Signal Processing and Control, 65: 102338. https://doi.org/10.1016/j.bspc.2020.102338
[13] Oltu, B., Akşahin, M.F., Kibaroğlu, S. (2021). A novel electroencephalography based approach for Alzheimer’s disease and mild cognitive impairment detection. Biomedical Signal Processing and Control, 63: 102223. https://doi.org/10.1016/j.bspc.2020.102223
[14] Pirrone, D., Weitschek, E., Di Paolo, P., De Salvo, S., De Cola, M.C. (2022). EEG signal processing and supervised machine learning to early diagnose Alzheimer’s disease. Applied Sciences, 12(11): 5413. https://doi.org/10.3390/app12115413
[15] Al-Jumeily, D., Iram, S., Vialatte, F.B., Fergus, P., Hussain, A. (2015). A novel method of early diagnosis of Alzheimer’s disease based on EEG signals. The Scientific World Journal, 2015(1): 931387. https://doi.org/10.1155/2015/931387
[16] Khare, S.K., Acharya, U.R. (2023). Adazd-Net: Automated adaptive and explainable Alzheimer’s disease detection system using EEG signals. Knowledge-Based Systems, 278: 110858. https://doi.org/10.1016/j.knosys.2023.110858
[17] Tsolaki, A., Kazis, D., Kompatsiaris, I., Kosmidou, V., Tsolaki, M. (2014). Electroencephalogram and Alzheimer’s disease: Clinical and research approaches. International Journal of Alzheimer’s Disease, 2014(1): 349249. https://doi.org/10.1155/2014/349249
[18] Bairagi, V. (2018). EEG signal analysis for early diagnosis of Alzheimer disease using spectral and wavelet based features. International Journal of Information Technology, 10(3): 403-412. https://doi.org/10.1007/s41870-018-0165-5
[19] Rodrigues, P.M., Bispo, B.C., Garrett, C., Alves, D., Teixeira, J.P., Freitas, D. (2021). Lacsogram: A new EEG tool to diagnose Alzheimer's disease. IEEE Journal of Biomedical and Health Informatics, 25(9): 3384-3395. https://doi.org/10.1109/JBHI.2021.3069789
[20] Amezquita-Sanchez, J.P., Mammone, N., Morabito, F. C., Marino, S., Adeli, H. (2019). A novel methodology for automated differential diagnosis of mild cognitive impairment and the Alzheimer’s disease using EEG signals. Journal of Neuroscience Methods, 322: 88-95. https://doi.org/10.1016/j.jneumeth.2019.04.013
[21] Miltiadous, A., Tzimourta, K.D., Giannakeas, N., Tsipouras, M.G., Afrantou, T., Ioannidis, P., Tzallas, A. T. (2021). Alzheimer’s disease and frontotemporal dementia: A robust classification method of EEG signals and a comparison of validation methods. Diagnostics, 11(8): 1437. https://doi.org/10.3390/diagnostics11081437
[22] Kulkarni, N.N., Bairagi, V.K. (2017). Extracting salient features for EEG-based diagnosis of Alzheimer's disease using support vector machine classifier. IETE Journal of Research, 63(1): 11-22. https://doi.org/10.1080/03772063.2016.1241164
[23] Nour, M., Senturk, U., Polat, K. (2024). A novel hybrid model in the diagnosis and classification of Alzheimer's disease using EEG signals: Deep ensemble learning (DEL) approach. Biomedical Signal Processing and Control, 89: 105751. https://doi.org/10.1016/j.bspc.2023.105751
[24] Fiscon, G., Weitschek, E., De Cola, M.C., Felici, G., Bertolazzi, P. (2018). An integrated approach based on EEG signals processing combined with supervised methods to classify Alzheimer’s disease patients. In 2018 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Madrid, Spain, pp. 2750-2752. https://doi.org/10.1109/BIBM.2018.8621473
[25] Cassani, R., Estarellas, M., San-Martin, R., Fraga, F.J., Falk, T.H. (2018). Systematic review on resting-state EEG for Alzheimer’s disease diagnosis and progression assessment. Disease Markers, 2018(1): 5174815. https://doi.org/10.1155/2018/5174815