© 2024 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
OPEN ACCESS
With the use of Brain-Computer Interface (BCI) technologies, the brain and the outside world can communicate directly, bypassing the peripheral nervous system. This concept is fascinating because brain activity manifests as electrical signals generated by neurons, the brain's information-processing units. Techniques for processing these electrical signals are crucial for mapping this activity into reliable brain-computer interfaces. Electroencephalography (EEG) stands out as one of the most commonly utilized BCI techniques, primarily due to its ease of use and non-invasive character. A key aspect of this technology is the capacity of a BCI system to interpret patterns of cognitive activity through computational algorithms in order to manipulate external devices. The present study examines work by researchers analyzing EEG signals originating from the brain, encompassing methodologies that rely on multi-channel EEG data as well as diverse physiological signals. The focus extends to applications developed since 2018, covering details such as the nature of the data employed, the specifications of the equipment used for capturing electrical signals for control purposes, the number of electrodes deployed, the number of participants involved in generating the data essential for cutting-edge BCI applications, the techniques used to obtain EEG features, and the best accuracy levels achieved in those applications. Overall, BCI technology is a promising field with a vast range of applications. As the technology advances, we can expect more sophisticated and reliable brain-computer interfaces that enhance the lives of people with disabilities and neurological disorders.
electroencephalography, brain-computer interface, discrete signal processing, neural mechanisms
One of the essential principles underlying human civilization is interaction and communication. This foundational aspect facilitates the expression of emotions, ideas, and innovative thoughts. Human communication is rendered more fluid and less constrained, whether conveyed through vocalization, gestures, or written text. These avenues of engagement are absent for individuals with locked-in syndrome. The principal etiological factors contributing to locked-in syndrome include multiple sclerosis, amyotrophic lateral sclerosis (ALS), cerebral palsy, brain stem stroke, and spinal cord injury [1, 2]. Although individuals afflicted with locked-in syndrome possess acute awareness of their environment, they are incapable of communicating or socially interacting with others [3]. Because such an individual encounters substantial challenges in establishing connections with others, numerous research endeavors within the domain of human-computer interaction (HCI) concentrate on brain-computer interfaces (BCIs). BCIs have been employed, for instance, to monitor activity [4, 5], engage with software and gaming applications [6], and directly manipulate the movement of physical objects [7]. The integration of BCIs with supplementary sensors, such as eye-tracking [8] and gyroscopes [9], has the potential to enhance BCI efficacy. This integration can augment the user's degrees of freedom (for instance, the user may select an item using eye-tracking while simultaneously issuing a command through the BCI). There are many effective EEG-based BCI applications available, including wheelchair controllers [10] and word speller programs [11]. Moreover, BCIs can be utilized not only for the mental control of devices but also for the interpretation of our mental states [12].
The oscillatory nature of electrical potentials in the brain, resulting from the ionic current flow among neurons, is captured by an electroencephalogram (EEG). EEG data is acquired through the measurement of electrical activity at electrode sites on the scalp. The 10-20 electrode placement method [12-14], illustrated in Figure 1, provides a standardized system to ensure consistent reproducibility. When employed in real-world applications, the BCI encounters multiple challenges, including:
Figure 1. The 10–20 system of electrode placement [12]
1. Data throughput rate (bandwidth): BCI applications face limitations in response time and control precision due to low data bandwidth.
2. Low BCI signal strength: Brain signals typically exhibit low intensity, complicating their extraction and necessitating signal amplification.
3. High error rate: The weak signal and slow data throughput contribute significantly to the elevated error rate, compounded by considerable fluctuations in brain signals.
4. Unreliable signal characterization: Electrodes capture signals from specific brain regions, yet inaccurate classification and interference hinder effective signal categorization.
Therefore, the objective of this article is a comprehensive overview of brain-computer interface studies from recent years. It delves into various aspects, including the characteristics of the data utilized, the specifications of the apparatus employed for capturing electrical signals intended for control purposes, the number of electrodes implemented, the number of participants engaged in generating the data pivotal for avant-garde BCI applications, the methodologies adopted for processing the electroencephalography (EEG) signals produced by the brain, and the optimal accuracy levels attained in these applications, thereby enabling the integration of our findings with the complex and enigmatic functionality of the brain.
The organization of this survey is as follows: Section 2 surveys the various brain signals. Section 3 demonstrates the types of EEG signals and the purposes they serve in BCI. In Section 4, the neural mechanisms underlying smart BCIs based on EEG machine learning (ML) and deep learning techniques are explained, along with the paradigms into which they are subdivided. The components of most BCI models are described in Section 5, and then the most popular applications are listed in Section 6, together with a table detailing a number of studies carried out by researchers across the various applications in this field. Finally, conclusions and an overview of a few open problems and potential solutions are presented in Section 7.
Brain signals can be detected and evaluated using a variety of imaging methods, including magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI), functional near-infrared (fNIR) imaging, and positron emission tomography (PET). It is currently not practical to use MEG, fMRI, or PET on a daily basis because of their high cost, extensive technological requirements, and absence of real-time capabilities [15, 16]. According to experts, only fNIR and electrical field monitoring are anticipated to have immediate application in clinical settings. Electrocorticography (ECoG) [17], a method for capturing electrical activity in the brain, includes recording spike trains and local field potentials (LFPs) on the scalp, on the cortex, and inside the brain. There are advantages and disadvantages to consider for each technique (see Figure 2). Local field potential methods such as ECoG provide strong topographical resolution and can operate across a wide frequency range. Direct brain control of external devices has been demonstrated to be highly promising via brain-computer interfaces (BCIs) [16], such as the capacity to reestablish self-feeding [18-20] by using invasive signal methods to record intracortical neural activity in monkeys. However, these methods are invasive and require electrodes to be inserted on or inside the cortex. The main issues with invasive BCIs that need to be resolved before they can be applied in therapeutic contexts are long-term safety, signal durability, and signal stability. On the other hand, muscular electromyography (EMG) and ocular electrooculography (EOG) activity can occasionally contaminate electroencephalography (EEG) recordings [21-26].
Figure 2. A hierarchical classification of brain-machine interfaces [27]
The development of EEG-based brain-computer interfaces is significantly hampered by the much lower signal-to-noise (S/N) ratio of non-invasive techniques compared to invasive methods. A common technique for enhancing the S/N ratio is repeated averaging of trials time-locked to the stimulus, which may be utilized to create Event-Related Potentials (ERPs) [22]. Alternatively, users may be trained to control their brain activity, for example by modulating the 8–12 Hz sensorimotor mu rhythm or Slow Cortical Potentials (SCPs), to enhance the S/N ratio for reliable BCI control. As people get better at managing their brain activity, the S/N ratio rises; it is anticipated that the fluctuation in a person's EEG signal will diminish once they learn to properly control their brain activity [24]. Short-term training can be helpful for SCPs or sensorimotor mu rhythms; nevertheless, long-term training is frequently needed because spontaneous EEG activity is unpredictable [25]. Most BCIs use electroencephalography (EEG) as the primary approach to generate BCI control signals because of its simplicity, non-invasiveness, high temporal resolution, portability, and low cost [26]. In addition to the facts that invasive BCIs require major surgery and that their suitability for long-term use remains unknown due to brain tissue interactions, non-invasive EEG-based BCIs are easier to set up and do not require surgery [28-32].
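The trial-averaging idea described above can be sketched numerically. The example below is a minimal simulation with entirely synthetic, illustrative values (a Gaussian bump standing in for a time-locked response); it only demonstrates that coherent averaging shrinks uncorrelated noise by roughly the square root of the number of trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic example (illustrative values only): a fixed deflection near
# 300 ms, buried in noise much larger than the response itself.
fs, n_trials = 256, 200
t = np.arange(fs) / fs                              # 1 s of data per trial
erp = 2.0 * np.exp(-((t - 0.3) ** 2) / 0.005)       # the time-locked response
trials = erp + rng.normal(0.0, 5.0, size=(n_trials, fs))

# Averaging time-locked trials: the response adds coherently while the
# uncorrelated noise shrinks by roughly sqrt(n_trials).
average = trials.mean(axis=0)

noise_before = (trials[0] - erp).std()
noise_after = (average - erp).std()
```

After averaging 200 trials, the residual noise is roughly 14 times smaller than in a single trial, and the latency of the response peak becomes recoverable from the average.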
The brain produces neural activity in large quantities, and there are numerous signals that a BCI can utilize. These signals fall into two kinds: spikes and field potentials [14, 28]. Spikes are recorded using invasively implanted microelectrodes and represent the action potentials of specific neurons. Field potentials, which may be detected by EEG or by implanted electrodes, are a gauge of combined synaptic, neuronal, and axonal activity.
EEG signals are classified based on their frequency bands [29], as illustrated in Figure 3.
Figure 3. Five major frequency ranges of brain waves [21]
• Delta signals range from 0.5 to 3.5 Hz, typically exhibiting the highest amplitude and slow movement, common in newborns and adults during slow-wave sleep.
• Theta signals, ranging from 3.5 to 7.5 Hz, are associated with daydreaming and inefficiency, marking the transition between wakefulness and sleep, with high levels in adults deemed abnormal.
• Alpha signals operate between 7.5 and 12 Hz, initially identified by Hans Berger as "alpha waves," predominantly observed in the posterior regions of the head, with increased power noted post-marijuana use.
• Beta signals, with frequencies from 12 to about 30 Hz, exhibit symmetrical distribution and are most pronounced anteriorly, often categorized into types 1 and 2; increased activity is observed during focused tasks or inhibition.
• Gamma signals are characterized by frequencies of 31 Hz and above, reflecting cognitive awareness.
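The band definitions above translate directly into a band-power computation. The sketch below uses a synthetic signal dominated by a 10 Hz (alpha-band) oscillation; the gamma band's upper edge of 45 Hz is an arbitrary choice for illustration, since the text only gives a lower bound.

```python
import numpy as np
from scipy.signal import welch

fs = 256
rng = np.random.default_rng(1)
t = np.arange(fs * 10) / fs
# Synthetic recording: a 10 Hz (alpha-band) oscillation plus broadband noise.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# Welch periodogram, then total power inside each band listed above.
freqs, psd = welch(x, fs=fs, nperseg=fs * 2)

bands = {"delta": (0.5, 3.5), "theta": (3.5, 7.5), "alpha": (7.5, 12.0),
         "beta": (12.0, 30.0), "gamma": (31.0, 45.0)}

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])   # approximate integral

powers = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
```

For this synthetic signal the alpha band carries the most power, which is exactly the kind of feature a BCI would use to distinguish mental states.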
Researchers have developed clinical uses for EEG, and it is considered the gold-standard test for detecting and diagnosing epilepsy, stroke, and a host of other trauma-related conditions. EEGs have also been used in non-clinical situations for BCI-based games, motor imagery tasks (e.g., thinking about moving the left or right hand, foot, or tongue), and passive BCI, in which the EEG is analyzed but not used to control any device [31]. Classifying various EEG tasks or scenarios is among the primary objectives of an EEG-based BCI.
Machine learning (ML) techniques are commonly used in intelligent systems [33]. ML refers to a system that can learn from training data for specific activities in order to automate the creation of analytical models and to complete or augment related operations [34]. Artificial neural networks (ANNs) are the foundation of the deep learning (DL) paradigm, a branch of machine learning [35]. According to Al Faiz and Al-Hamadani [36], ML algorithms frequently concentrate on classifying EEG data connected to real and imagined movements of the hands and feet in order to execute control operations. Because DL is successful in sectors with vast, high-dimensional data, it outperforms ML methods for the majority of text, image, video, voice, and audio processing tasks [37]. Even so, classical ML algorithms may still produce superior outcomes for low-dimensional input data, particularly when training data are scarce, and their output is more interpretable than that of deep neural networks [38, 39].
BCIs are classified as "evoked" when external stimulation is required for them to function and as "spontaneous" when it is not. Some authors also refer to evoked and spontaneous systems as exogenous and endogenous, respectively [18].
The present focus of several research institutions is centered on endogenous EEG-based brain-computer interfaces (BCIs) that are utilized to decode movement intention, as evidenced by the scientific literature [26]. These BCIs work by altering the EEG's sensorimotor rhythms, captured over the sensorimotor brain area across the scalp, employing motor imagery paradigms [40-42]. Through these methods, the EEG can provide valuable insight into the cognitive processes underlying motor intention. Despite the benefits of endogenous BCIs for motor-related activities, they often require a lengthy training time to establish conscious control over the brain's sensorimotor rhythms [43]. Additionally, they demonstrate mediocre multiclass decoding [44] and restricted information transfer rate (ITR) [45] performance. These flaws, in addition to very significant inter-individual variability, may prevent such systems from being used outside of a controlled laboratory setting.
Exogenous BCIs work using evoked brain signals, such as steady-state evoked potentials and Event-Related Potentials (ERPs), which can be triggered through visual, auditory, or somatosensory stimuli [46]. These signals differ from those used by endogenous BCIs. The most popular exogenous BCI paradigms are those that use visually evoked potentials (VEPs). Visual stimuli, such as LEDs that flash quickly and repeatedly in front of the person, cause VEPs to be generated. These potentials are relatively simple to manipulate and quantify, and they strongly depend on the nature and characteristics of the visual stimuli [47].
A multitude of investigations have elucidated an extensive array of neural signals that may function as control signals in BCI systems. Only the signals utilized in contemporary BCI systems are examined in the subsequent discussion.
4.1 Oscillatory EEG activity
Oscillatory EEG activity is induced by neuronal feedback loops in a complicated network. Observable oscillations are produced when the neurons in these feedback loops fire in sync. The two oscillations of interest are the Rolandic mu rhythm, which occurs in the frequency range between 10 and 12 Hz, and the central beta rhythm, which occurs in the frequency range of 14–18 Hz. This activity is an example of "idling" or rest [29].
4.2 Event-Related Potentials
Event-Related Potentials (ERPs) are time-locked brain reactions that occur immediately after a particular internal or external event. These potentials become evident in response to sensory or mental events, or to the omission of constantly occurring stimuli. Exogenous components of the ERP form as a result of processing an external event but are unrelated to the role of the stimuli in information processing. Endogenous ERP components, on the other hand, emerge from an internal processing event; they depend on the task for which the stimulus was used and on how the stimulus and its context interact [48]. ERP events fall into the following categories.
4.2.1 Event-related synchronization and desynchronization
Event-related synchronization (ERS) and desynchronization (ERD) are two complementary characteristics of a specific form of ERP. When neuronal synchrony declines, power in particular frequency ranges declines; this signal-amplitude reduction characterizes an ERD. Conversely, an increase in the synchronization of neurons causes an increase in power in certain frequency bands, which is the hallmark of ERS. Table 1 compares the synchronous and asynchronous BCI methods currently in use.
Table 1. A comparison of the two BCI methods currently in use
| | Synchronous BCIs | Asynchronous BCIs |
|---|---|---|
| Advantages | Controlling user artifacts is simpler, because the user can move or blink in predetermined time windows. A simpler design (the system anticipates when the user's instruction will be received). | Can be used at the user's discretion. |
| Disadvantages | The system imposes commands; the user is unable to choose when to carry them out. | Prone to user-generated artifacts (such as eye blinks and movements). Computationally more difficult, since it offers continuous real-time classification. |
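ERD/ERS is commonly quantified as the relative band-power change between a baseline period and a task period. The sketch below uses synthetic, illustrative numbers (a mu-band oscillation whose amplitude halves during imagined movement); it is not tied to any particular study in this survey.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 256
t = np.arange(fs) / fs

# Synthetic mu rhythm (10 Hz) whose amplitude halves during motor imagery.
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)
task = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)

def band_power(x):
    return np.mean(x ** 2)      # mean power of the (already band-limited) signal

def erd_ers_percent(p_base, p_task):
    # Negative -> desynchronization (ERD); positive -> synchronization (ERS).
    return 100.0 * (p_task - p_base) / p_base

change = erd_ers_percent(band_power(baseline), band_power(task))
```

Halving the amplitude quarters the power, so the computed change is strongly negative, i.e., an ERD.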
4.2.2 Visual evoked potential
The visual evoked potential (VEP), an element of the electroencephalogram that occurs in reaction to visual input, is another form of ERP frequently utilized in BCI. Because VEPs depend on the user's ability to direct their gaze, consistent muscle control is necessary [49]. The P300 is an ERP element that is triggered in the course of reaching a choice and is supposed to represent mechanisms involved in categorization or sensory assessment. It is typically elicited with the oddball paradigm, which combines high-probability non-target items with low-probability target items [50]. The user is given a task that requires the items to be divided into the two categories. When a rare target event is presented, a large positive wave, the P300 component, appears around 300 milliseconds after the event begins [16].
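The oddball logic can be sketched with synthetic data: rare target epochs carry a positive deflection near 300 ms, frequent non-target epochs do not, and averaging each condition separately makes the difference visible. All amplitudes and counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 256
t = np.arange(fs) / fs
p300 = 3.0 * np.exp(-((t - 0.3) ** 2) / 0.004)      # positive wave near 300 ms

# Oddball paradigm: rare targets elicit the P300, frequent non-targets do not.
targets = p300 + rng.normal(0.0, 2.0, size=(40, fs))
nontargets = rng.normal(0.0, 2.0, size=(200, fs))

target_avg = targets.mean(axis=0)
nontarget_avg = nontargets.mean(axis=0)
peak_latency_s = t[np.argmax(target_avg)]           # expected near 0.3 s
```

A P300 speller applies exactly this comparison per row/column flash to decide which symbol the user attended to.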
4.2.3 Slow cortical potential
Slow cortical potentials (SCPs) are changes in the depolarization levels of certain dendrites. A positive SCP denotes the elimination of synchronized potentials from the dendrites, whereas a negative SCP relates to the total quantity of synchronized potentials.
4.2.4 Neuronal potential
A voltage spike produced by a single neuron is called a neuronal potential. The potential of a single neuron or of a group of neurons may be measured. The signal represents the temporal pattern, correlation, and average rate of neural firing. The average firing rate of neurons in the cortical regions linked to a task can alter over time, which can be used to quantify learning [51].
Signal capture, preprocessing, feature extraction, and classification are the components included in the majority of BCI models [33, 52, 53]. Electrodes positioned on the scalp's surface are used to acquire analog signals [54], which are then converted to digital form using analog-to-digital converters. The signals are then subjected to preprocessing, which involves eliminating power-line noise, brain noise, and the various artifacts caused by the use of muscles, such as those of the face and eyes [55]. Feature extraction is one of the key processes, owing to its effect on the effectiveness of the classification algorithms. Features such as the mean, median, variance, maximum, and minimum are obtained in the time and frequency domains [56] using many signal processing strategies, such as common spatial patterns (CSP) [57-93], power spectral density (PSD) [70], and wavelet transforms [67], along with other feature extraction approaches such as statistical measures [23]. The feature extraction process creates a vector comprising the EEG signals' most important properties, and this vector serves as the input to the classification systems. The next step, classification, is carried out with one of several algorithms, such as ANN, decision trees (DT), SVM, KNN, or LDA [61]. Various scientific, engineering, and research sectors currently assess and employ BCIs to create applications that offer solutions to challenging problems. The three major steps for creating a BCI system are as follows [14, 48].
Figure 4. Components included in the majority of BCI models [40]
As seen in Figure 4, the three steps are signal collection in step one, signal processing in step two, and data manipulation in step three.
Step 1: Signal Gathering. The brain's electrical impulses must be captured via a signal acquisition procedure. The electrical signals can be recorded from the scalp, from the brain's surface, or from the activity of neurons. The captured signals must be amplified because their intensity is often modest, and they must then be converted to digital form to be utilized by applications running on computers.
Step 2: Processing of Signals. The signals acquired in Step 1 are examined in this phase to produce the control signals. Signal processing may include the following sub-operations:
• Preprocessing
In electroencephalogram (EEG) signal examination, preprocessing constitutes an indispensable phase intended to eliminate noise and extraneous artifacts from the raw signals, thereby augmenting the integrity of the information for further examination. EEG readings are intrinsically noisy due to interference from external sources such as electrical power lines, in addition to biological artifacts generated by muscular movements and ocular blinks [55].
Artifact Removal: Prevalent artifacts encompass ocular motion, muscular contractions, and electrical interference from external apparatus. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) are two examples of methodologies utilized to isolate and eradicate these undesirable components from the data. Ocular movement artifacts, for example, can be identified by their distinctive low-frequency oscillations, typically residing in the delta or theta frequency range.
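Of the two methods named above, PCA is the easier to sketch with plain NumPy. The toy example below mixes a large synthetic "blink" into several channels, removes the dominant principal component, and reconstructs the data; real pipelines more often use ICA (e.g., via a toolbox such as MNE), and all signal shapes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_channels, n_samples = 8, 1024
t = np.arange(n_samples) / 256.0

brain = 0.5 * np.sin(2 * np.pi * 10 * t)              # small 10 Hz activity
blink = 20.0 * np.exp(-((t - 2.0) ** 2) / 0.01)       # large ocular artifact
mixing = rng.normal(size=n_channels)                  # artifact scalp pattern

data = (np.outer(np.ones(n_channels), brain)
        + np.outer(mixing, blink)
        + 0.1 * rng.normal(size=(n_channels, n_samples)))

# PCA via SVD: the high-amplitude blink dominates the first component.
mean = data.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(data - mean, full_matrices=False)

# Zero the first component and reconstruct -> artifact largely removed.
s_clean = s.copy()
s_clean[0] = 0.0
cleaned = (u * s_clean) @ vt + mean
```

Dropping a whole component also discards whatever genuine brain activity projects onto it, which is why component selection in practice is done carefully (often by inspecting topographies and time courses).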
Filtering: Band-pass filters are employed to remove frequencies that lie outside the desired spectrum. For instance, EEG data often necessitate a high-pass filter to eliminate gradual drifts (e.g., below 0.5 Hz) and a low-pass filter to eradicate high-frequency noise (e.g., above 50 Hz). The determination of suitable cut-off frequencies is contingent upon the specific type of EEG signals under examination. A prevalent strategy involves the application of a band-pass filter to retain frequencies ranging from 0.5 to 50 Hz, as this spectrum typically encompasses the most pertinent cerebral activity for EEG investigations.
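The 0.5–50 Hz band-pass strategy described above can be implemented with a zero-phase Butterworth filter. The filter order and synthetic test signal below are illustrative choices, not prescriptions from the text.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0
# 4th-order Butterworth band-pass keeping 0.5-50 Hz, as described above.
b, a = butter(4, [0.5, 50.0], btype="bandpass", fs=fs)

rng = np.random.default_rng(5)
t = np.arange(int(fs * 4)) / fs
x = (np.sin(2 * np.pi * 10 * t)        # 10 Hz activity to keep
     + np.sin(2 * np.pi * 60 * t)      # 60 Hz line noise to attenuate
     + 0.02 * t)                       # slow drift to remove

y = filtfilt(b, a, x)                  # zero-phase (no latency distortion)
```

`filtfilt` applies the filter forward and backward, which squares the magnitude response and cancels the phase delay, a useful property when ERP latencies matter.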
Normalization: Subsequent to filtering, EEG data are frequently normalized to standardize the amplitude across disparate channels or subjects. This procedure aids in mitigating the variability engendered by disparities in scalp conductivity or electrode positioning. Z-score normalization or min-max scaling may be implemented to ensure that all channels contribute equivalently during ensuing processing phases.
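Both normalization schemes mentioned above are one-liners per channel. The sketch below applies them to synthetic channels with deliberately mismatched offsets and scales (standing in for electrode differences).

```python
import numpy as np

rng = np.random.default_rng(6)
# Channels with very different offsets and scales (e.g., electrode differences).
data = (rng.normal(size=(4, 512)) * np.array([[1.0], [5.0], [0.5], [10.0]])
        + np.array([[0.0], [30.0], [-10.0], [5.0]]))

def zscore(x, axis=-1):
    """Zero mean, unit variance per channel."""
    return (x - x.mean(axis=axis, keepdims=True)) / x.std(axis=axis, keepdims=True)

def minmax(x, axis=-1):
    """Rescale each channel into [0, 1]."""
    lo = x.min(axis=axis, keepdims=True)
    hi = x.max(axis=axis, keepdims=True)
    return (x - lo) / (hi - lo)

z = zscore(data)
m = minmax(data)
```

Z-scoring preserves the waveform shape while equalizing each channel's contribution; min-max scaling is preferable when a bounded input range is required downstream.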
Epoching and Segmentation: Depending on the nature of the investigation, the continuous EEG signals may be partitioned into epochs, typically time-locked to particular events (e.g., stimuli or motor commands). These epochs facilitate a concentrated analysis of cerebral responses to specific tasks or stimuli, thereby enabling the extraction of features pertinent to Event-Related Potentials (ERPs) or other task-relevant neural dynamics.
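Epoching amounts to slicing fixed windows around each event onset. A minimal sketch, with a ramp standing in for the continuous recording and hypothetical event times:

```python
import numpy as np

fs = 256
continuous = np.arange(10 * fs, dtype=float)    # stand-in continuous recording
events = [2 * fs, 5 * fs, 8 * fs]               # stimulus onsets (sample indices)

def epoch(signal, onsets, fs, tmin=-0.25, tmax=0.75):
    """Cut time-locked epochs [tmin, tmax) seconds around each event onset."""
    start, stop = int(tmin * fs), int(tmax * fs)
    return np.stack([signal[o + start : o + stop] for o in onsets])

epochs = epoch(continuous, events, fs)          # shape: (n_events, n_samples)
```

A small pre-stimulus interval (here 250 ms) is conventionally kept so that each epoch carries its own baseline for correction.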
• Feature extraction
Feature extraction distills the most relevant information from EEG signals, converting raw time-domain data into features for machine learning algorithms. Key methods include:
Time-Domain Features: Basic statistics like mean, variance, skewness, and kurtosis capture the signal’s overall behavior. These features are useful for detecting significant changes in the EEG signals, such as those caused by motor imagery or task engagement.
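The statistics listed above form a compact feature vector per epoch. A minimal sketch over one synthetic epoch:

```python
import numpy as np
from scipy.stats import kurtosis, skew

rng = np.random.default_rng(7)
epoch = rng.normal(size=512)       # one synthetic single-channel epoch

def time_domain_features(x):
    """Basic statistics used as time-domain EEG features."""
    return np.array([x.mean(), np.median(x), x.var(),
                     x.max(), x.min(), skew(x), kurtosis(x)])

features = time_domain_features(epoch)
```

In a multi-channel setting the same function is applied per channel and the results concatenated into one vector for the classifier.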
Frequency-Domain Features: Using techniques like Fast Fourier Transform (FFT) or Power Spectral Density (PSD), EEG signals are broken down into frequency bands (e.g., delta, theta, alpha, beta, gamma). The power in each band is extracted as a feature, commonly applied in tasks such as classifying mental states (e.g., alertness vs. relaxation) and motor imagery [56].
Instantaneous Frequency (IF): Unlike PSD, IF provides a time-varying representation of frequency content, enabling the detection of quick transitions between cognitive states. This method is particularly useful in tasks that require continuous monitoring of brain activity, such as task switching.
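One common way to obtain a time-varying frequency estimate is the analytic signal: take the Hilbert transform, unwrap the phase, and differentiate. The signal below, which switches from 8 Hz to 20 Hz halfway through, is a synthetic stand-in for a cognitive state transition.

```python
import numpy as np
from scipy.signal import hilbert

fs = 256.0
t = np.arange(int(fs * 2)) / fs
# A signal that jumps from 8 Hz to 20 Hz halfway through (a "state switch").
x = np.where(t < 1.0, np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 20 * t))

analytic = hilbert(x)                         # x + i * HilbertTransform(x)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # Hz, sample by sample

first_half = inst_freq[50:200].mean()          # expected near 8 Hz
second_half = inst_freq[320:460].mean()        # expected near 20 Hz
```

Unlike a PSD over the whole window, which would merely show two peaks, the instantaneous frequency localizes *when* the switch happened.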
Spatial Features: Methods like Common Spatial Patterns (CSP) help improve class separability in multi-channel EEG data. CSP identifies spatial filters that maximize the variance between different classes [93, 94], such as left- and right-hand motor imagery, making it highly effective for classification tasks.
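CSP can be written as a generalized eigenvalue problem on the two classes' average covariance matrices. The sketch below uses synthetic trials in which each class has excess variance on a different channel, an idealized stand-in for lateralized motor imagery.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(8)
n_trials, n_channels, n_samples = 30, 4, 256

# Two synthetic classes whose variance concentrates on different channels.
scale_a = np.array([3.0, 1.0, 1.0, 1.0])   # class A: channel 0 strong
scale_b = np.array([1.0, 1.0, 1.0, 3.0])   # class B: channel 3 strong
class_a = rng.normal(size=(n_trials, n_channels, n_samples)) * scale_a[:, None]
class_b = rng.normal(size=(n_trials, n_channels, n_samples)) * scale_b[:, None]

def mean_cov(trials):
    return np.mean([np.cov(tr) for tr in trials], axis=0)

ca, cb = mean_cov(class_a), mean_cov(class_b)

# CSP as a generalized eigenproblem:  Ca w = lambda (Ca + Cb) w.
eigvals, eigvecs = eigh(ca, ca + cb)        # eigvals ascending in [0, 1]
# Filters at the two ends maximize variance for one class while
# minimizing it for the other.
w_first, w_last = eigvecs[:, 0], eigvecs[:, -1]
```

The log-variances of trials projected through the first and last few filters are the classic CSP features fed to an LDA or SVM classifier.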
Wavelet Transform: The wavelet transform allows for multi-resolution analysis of EEG signals [93], capturing both time and frequency domain information. This is particularly valuable for tasks involving non-stationary signals, such as seizure detection or cognitive workload monitoring.
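To keep the sketch dependency-free, the multi-resolution idea is shown below with the simplest wavelet, the Haar wavelet, implemented directly in NumPy; EEG work more commonly uses Daubechies wavelets via a dedicated library.

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the orthonormal Haar discrete wavelet transform."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass: coarse trend
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass: fast changes
    return approx, detail

def haar_wavedec(x, levels):
    """Multi-resolution decomposition: repeatedly split the approximation."""
    coeffs = []
    for _ in range(levels):
        x, d = haar_dwt_level(x)
        coeffs.append(d)                          # detail at this scale
    coeffs.append(x)                              # final approximation
    return coeffs

signal = np.sin(2 * np.pi * np.arange(256) / 32.0)   # slow oscillation
coeffs = haar_wavedec(signal, levels=3)
```

Because the transform is orthonormal, total energy is preserved across the coefficient sets, and the per-scale energies themselves serve as features for non-stationary signals.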
These feature extraction methods significantly improve the usefulness of the data, enabling more precise and effective analysis in brain-computer interface (BCI) systems.
• Signal Translation algorithm classification
The following process, known as the translation algorithm, transforms the signal properties that have been obtained into device commands and orders that achieve the user's objective. The classification algorithm may utilize linear or nonlinear approaches to categorize the signals based on their frequency and form.
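Among the linear approaches mentioned above, a two-class LDA is the simplest to sketch. The implementation below is a minimal NumPy version with hypothetical two-dimensional band-power features; production systems would normally use a library implementation with regularization.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical band-power feature vectors for two mental tasks.
class0 = rng.normal(loc=[1.0, 4.0], scale=0.5, size=(100, 2))
class1 = rng.normal(loc=[4.0, 1.0], scale=0.5, size=(100, 2))

# Two-class LDA: w = Sw^{-1} (mu1 - mu0), threshold at the projected midpoint.
mu0, mu1 = class0.mean(axis=0), class1.mean(axis=0)
sw = np.cov(class0.T) + np.cov(class1.T)        # within-class scatter
w = np.linalg.solve(sw, mu1 - mu0)
threshold = w @ (mu0 + mu1) / 2.0

def predict(x):
    """Map feature vectors to class labels 0/1 (i.e., to device commands)."""
    return (x @ w > threshold).astype(int)

train_acc = np.mean(np.r_[predict(class0) == 0, predict(class1) == 1])
```

The predicted label is what the translation algorithm then maps to a device command (e.g., "move left" vs. "move right").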
Step 3: Data Manipulation. Once the signals have been classified, the output is adjusted to fit the output platform (such as a computer screen).
Applications. Where and how can we employ BCIs today?
1. Communication. One of the first uses of BCIs was yes/no communication. The "Right Justified Box" technique, which entailed employing motor imagery to select between two targets, is a well-known illustration of this [62].
2. Typing. Typing is the oldest BCI application and one of those most often utilized today. The "Farwell-Donchin Matrix" [63] is one of the methods that has garnered the most interest: to evaluate the P300 evoked response, a matrix of alphabetical letters and other symbols is flashed in a random sequence (Figure 5).
Figure 5. A matrix of alphabetical letters and other symbols is flashed in a random sequence in the BCI application [5]
3. Web surfing. Several research teams have proposed controlling the complete system instead of just the web browser. For example, Moore et al. [64] employed motor imagery in "The Brain Browser" to choose the commands "next" and "previous".
4. Manipulating. This category covers applications used to directly influence real-world or virtual objects, such as propelling a wheelchair ahead, changing its pace, sending commands to turn left, or choosing an item in a video game. An instance of authentic robot piloting comes from the study by LaFleur et al. [65], in which mental actions control an actual robot drone: raising it by visualizing the movement of both hands, lowering it by visualizing the movement of both feet, and so on. Another illustration is the control of a virtual dwelling. The study [66] provided a method of operating a virtual apartment where the many options for orders and activities were displayed on a screen; the "Farwell-Donchin Matrix" was used, in which the borders of the pictures were flashed to elicit the P300 evoked response.
5. Computers that help users. Shenoy and Tan [67] used this phrase to characterize systems that employ the outcomes of the implicit processing that humans already perform in their decision-making (for instance, when a person notices a candle, the brain instantly detects and classifies the candle merely by passively perceiving it, without any additional specific mental work). Despite the current sophistication of machine learning techniques, the human brain is still superior at tasks like identifying data in the environment. As a consequence, we can help existing pattern recognition systems recognize and categorize pictures or other stimuli rapidly and effectively.
6. Utilizations for creativity. Miranda et al. [68] presented a method that creates music using the EEG signals' prominent frequencies. The output of the currently identified dominant frequency has an impact on the music engine's output.
7. Software that relates to health. There have been several uses for BCIs since they were first suggested as a remedy for people with disabilities. Coma monitoring (cognitive function detection), Attention Deficit Hyperactivity Disorder (ADHD) therapy, and rehabilitation and prosthetics, including stroke recovery treatments, are some of the uses.
8. Applications for cognitive state monitoring. Examples include potentially life-or-death activities that demand a high degree of human focus, such as air traffic control, as well as applications for improving user experience, such as altering the layout of a webpage if the system determines that the user is overworked, or adjusting the music and light switches in an apartment if the user is perceived as being tired. Five examples follow:
A. The reading-engagement app from the study [72] plays a movie connected to the current text to draw the user's attention back to what he or she is reading when the user becomes bored with the content (as judged by a BCI).
B. Afergan et al. [73] identified intervals of boredom or overload so that the work may be adjusted to the user as needed. Participants in the experiment had to arrange the flight paths for several unmanned aerial vehicles (UAVs) in a simulation. The scientists observed that by varying the task's complexity based on the participants' mental states and adding or subtracting UAVs, it was possible to reduce errors by 35%.
C. Alerting mechanisms. The Phylter system by Afergan [74] employed the user's cognitive state, together with information supplied by the user, to decide whether or not to deliver a notification message, based on the message's stated priority and a prediction of the user's interruptibility.
D. Practice of meditation. Eskandari and Erfanian [75] suggested conducting research with two groups of subjects: one practicing meditation and the other serving as the control group. The meditating subjects displayed an ERD of the beta rhythm while at rest; the control group did not.
E. BCIs as a means of assessing UX. Frey et al. [76] suggested using EEG-based BCIs to assess users' mental effort, attentiveness, and identification of interface failures as an evaluation tool during HCI trials.
Table 2. Methods and computations for creating dependable EEG-driven brain-computer interfaces for multiple uses
Ref. | No. of Participants | EEG Signal Related Information (Device/Platform) | No. of Electrodes | BCI Control Paradigm | No. of Classes | Application Contents | Used Dataset | Methods for EEG Features | Classification Algorithms and Results
--- | --- | --- | --- | --- | --- | --- | --- | --- | ---
[33] | 30 randomly selected subjects | BCI2000 system; LabVIEW 2015 Biomedical Toolkit and Signal Express | 64 | Motor imagery | Four classes and a rest class | Identification of motor movements: left fist, right fist, both fists, feet, and relaxing | Nervous-system-disorders laboratory; publicly available on PhysioNet | Amplitude, frequency, phase, and statistical measures (mean, variance, kurtosis) | Medium-ANN model gave the highest average score of 0.9998
[77] | 10 | BioSemi / OpenViBE | 32 | P300 | Thirty-six classes (tourism destinations divided over six continents) | Virtual world tour | Approved by a MOHW-designated public institutional bioethics committee | Significant features selected by a least-squares method | Stepwise linear discriminant analysis; average accuracy 96.6%
[78] | 109 | PhysioBank and PhysioToolkit software | 64 | Motor imagery | Two classes and a rest class | Left-hand movement, right-hand movement, and rest | PhysioNet: research resource for complex physiologic signals | Spectrogram features | CNN-based model: 93% accuracy
[79] | 9 | Ag/AgCl electrodes (from the source dataset) | 22 | Motor imagery | Two classes | Left and right hand | BCI Competition IV (2008), Graz University of Technology, Austria | Wavelet-domain features | Support vector machine (SVM); maximum accuracy 80%, average accuracy 76.24%
[80] | 10 | Brain Products GmbH, Ag/AgCl electrodes | 20 (compared with another signal type, EMG) | Motor imagery | Three classes and a rest class | Grasp actions: cylindrical (cup), spherical (ball), lateral (card) | EEG data collected at Korea University | Common spatial pattern (CSP) features | Linear discriminant analysis (LDA); 63.89±7.54% for actual movement and 46.96±15.30% for motor imagery
[81] | 9 | BrainVision Recorder, Brain Products GmbH, Germany, with active Ag/AgCl electrodes | 64 | Visual imagery | Six classes | Reflecting user intention from the visual scene ('ambulance', 'clock', 'light', 'toilet', 'TV', and 'water') | Collected by the authors with Korea University institutional review board approval | Common spatial pattern (CSP) features | 24.2% for regularized linear discriminant analysis (RLDA)
[82] | 18 | g.tec medical engineering GmbH, Austria | 8 | P300 | Two classes | Robotic hand for motor rehabilitation | Datasets generated and analyzed for the study | Common spatial pattern (CSP) features | 78.7% (target) and 85.7% for regularized linear discriminant analysis (RLDA)
[83] | 26 | Two g.tec USBamp amplifiers, OpenViBE | 36 | Visual imagery | Two classes and a rest class | Two pre-established pictures (a hammer or a flower) | Dataset generated during the study | Common spatial pattern (CSP) features | Spectrally weighted CSP (SpecCSP): 71% for visual imagery vs. visual observation, 61% for one observation cue vs. another, 77% for resting vs. observation/imagery
[84] | 1 | Brain Products GmbH, Gilching, Germany, dry electrodes | 32 | Visual imagery | Two classes (pairwise among three conditions) | Discriminate between visual imagery of a face, a scene, or the resting state | Data collected by other authors | Power-spectrum features | Linear SVM; binary classification accuracy 59.9% (p < 0.05)
[92] | 10 | Neuracle, China / Psychophysics Toolbox | 9 | SSVEP | Four classes | Robotic arm control | Study datasets approved by the Research Ethics Committee of the Chinese Academy of Medical Sciences | Spectrum and signal-to-noise ratio (SNR) features | 97.75% for FBCCA
[85] | 6 | Brain Products GmbH, Germany, Ag/AgCl electrodes | 64 | Visual imagery | Four classes | Swarm-drone flight control: 'Hovering', 'Splitting', 'Dispersing', and 'Aggregating' | Gathered by the study authors at Korea University in accordance with the Helsinki Declaration | Common spatial pattern (CSP) features | Linear discriminant analysis (LDA); highest accuracy 83%
[86] | 7 | Brain Products GmbH, Germany, Ag/AgCl electrodes | 64 | Imagined speech and visual imagery | Twelve classes and a rest class | Decoding user intention from imagined speech and visual imagery for twelve words/phrases (ambulance, clock, hello, help me, light, pain, stop, thank you, toilet, TV, water, and yes) | Gathered by the study authors at Korea University in accordance with the Helsinki Declaration | Statistical-analysis features | Random forest (RF); thirteen-class accuracy 34.2% (imagined speech) and 26.7% (visual imagery)
[87] | 32 | Emotiv EPOC headset | 14 | Motor imagery | Two classes | Imagining body kinematics (IBK) to provide natural cursor movement | University of Tennessee dataset | Mean power spectral density across the theta, alpha, beta, and gamma bands | 80% for a random forest classifier
[88] | 38 | BrainVision actiCHamp amplifier, EASYCAP | 64 | Perception and visual imagery | Twelve classes | Classification of perception and visual imagination of objects: apple, car, carrot, chicken, hand, eye, sheep, butterfly, rose, ear, chair, and violin | OSF Home | Spatial features | 93% for visual perception vs. rest; 28% for all twelve visual-perception classes
[89] | 4 | — | 128 | Perception and imagination task | Forty classes | Distinguish real images and classify the image category | ImageNet dataset | Entropy loss and mean squared error | 96% best classification accuracy by the "Mix" generative adversarial network (GAN)
[93] | 2 | BrainAmp MR plus amplifiers with Ag/AgCl electrodes | 59 | Motor imagery | Two classes | Classification of left- and right-hand imagery movement | BCI Competition IV dataset | Wavelet packet decomposition and grey wolf algorithm | 92.86% for subject "a" and 91.53% for subject "b"
[91] | 21 | BioSemi ActiveTwo system with Ag/AgCl damp electrodes | 32 | Perception and visual imagery | Three classes | Classification of object, digit, and shape classes | Collected at the School of Electrical Engineering and Computer Science, Korea | Time series, time-frequency maps, and CSP format | MultiRocket network: 63.62% for visual perception and 71.38% for visual imagery
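Among the paradigms in Table 2, the SSVEP entry ([92]) reports an FBCCA decoder; FBCCA is a filter-bank extension of canonical correlation analysis (CCA), which itself can be sketched with plain NumPy. The sketch below detects the stimulation frequency by correlating multichannel EEG against sine/cosine reference templates; the frequencies, channel count, window length, and noise level are illustrative assumptions.

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(Xc)
    qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return min(1.0, s.max())

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference matrix for one stimulation frequency."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def detect_ssvep(eeg, fs, candidate_freqs):
    """Pick the candidate frequency whose reference best matches the EEG.

    eeg: (n_samples, n_channels) array. Returns the winning frequency."""
    scores = [max_canon_corr(eeg, ssvep_reference(f, fs, eeg.shape[0]))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Hypothetical usage: 2 s of 3-channel data with a 12 Hz flicker response.
fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(1)
eeg = np.column_stack([np.sin(2 * np.pi * 12 * t + p) for p in (0.0, 0.4, 0.9)])
eeg += 0.5 * rng.standard_normal(eeg.shape)
print(detect_ssvep(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # expect 12.0
```

FBCCA extends this by filtering the EEG into several harmonic sub-bands, computing the CCA score in each, and combining the scores with weighted sums before the argmax.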
These examples show the wide range of BCI-based systems: the diverse applications (controlling a drone in a physical environment, adapting an interface to the user's workload), the different ways of influencing the system (imagining a movement versus refraining from a specific action), and the different BCI platforms (physical robots versus virtual environments). Table 2 summarizes research on applications created since 2018. These studies cover the type of data used, the specifications of the devices used to capture electrical signals for control, the number of electrodes, the number of participants needed to generate data for advanced BCI applications, the techniques for extracting EEG features, and the best accuracies achieved in those applications.
This study highlighted significant progress in EEG signal-processing techniques, especially the use of CSP for feature extraction and LDA for classification. These advances produced higher classification accuracy and faster system responses, setting the stage for more dependable and practical BCI applications.
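The CSP-plus-LDA pipeline named above can be sketched in a few lines of NumPy/SciPy: learn spatial filters that maximize the variance ratio between two classes, take log-variance features, and separate them with a Fisher discriminant. The synthetic data, channel count, trial length, and variance boosts below are illustrative assumptions, not values from any cited study.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """CSP spatial filters for two classes.

    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(tr) for tr in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem ca w = lambda (ca + cb) w; extreme
    # eigenvalues give the most discriminative spatial directions.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T  # (2 * n_pairs, n_channels)

def log_var_features(W, trials):
    """Log of normalized variance of each spatially filtered trial."""
    feats = []
    for tr in trials:
        v = (W @ tr).var(axis=1)
        feats.append(np.log(v / v.sum()))
    return np.array(feats)

def fit_lda(Xa, Xb):
    """Fisher LDA: weight vector and midpoint threshold."""
    ma, mb = Xa.mean(axis=0), Xb.mean(axis=0)
    sw = np.cov(Xa.T) + np.cov(Xb.T)       # within-class scatter
    w = np.linalg.solve(sw, mb - ma)
    thr = w @ (ma + mb) / 2
    return w, thr

# Hypothetical usage: 4-channel trials where class A has extra variance
# on channel 3 and class B on channel 0 (a crude stand-in for ERD/ERS).
rng = np.random.default_rng(2)
def make_trials(boost_ch, n=30):
    trials = 0.5 * rng.standard_normal((n, 4, 200))
    trials[:, boost_ch, :] += rng.standard_normal((n, 200))
    return trials
A, B = make_trials(3), make_trials(0)
W = csp_filters(A, B)
w, thr = fit_lda(log_var_features(W, A), log_var_features(W, B))
test = make_trials(0, n=10)  # unseen class-B trials
pred_b = (log_var_features(W, test) @ w) > thr
print(pred_b.mean())  # fraction of test trials classified as class B
```

Real pipelines add band-pass filtering (typically 8-30 Hz for motor imagery) before CSP and cross-validated regularization of the covariance estimates.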
This study's improved real-time response time and classification accuracy can directly improve BCI-driven assistive technologies, like communication devices for people with locked-in syndrome. Users can interact more confidently and efficiently with their environment by lowering error rates and increasing signal clarity. BCIs that have been refined using the techniques described in this work have the potential to be extremely important rehabilitation technologies, especially for stroke recovery. BCIs can enhance motor function and promote neuroplasticity by giving patients control over external devices like robotic limbs or virtual rehabilitation exercises through controlled motor imagery tasks. This study shows that training time reductions can improve BCI systems' usability and make them more accessible to non-expert users. This could have a big effect on how widely used BCI technologies are in clinical and home environments, where usability is crucial.
This study's exploration of BCI signal processing advances paves the way for cross-functional applications like virtual reality experiences and smart home control systems. Users may obtain smooth control over their surroundings by combining EEG-based BCIs with gyroscopes or eye tracking devices, enabling them to do everything from web browsing to home automation.
Even though the current study's results are encouraging, further research should extend the use of EEG data to more challenging motor imagery paradigms to strengthen the robustness of BCIs in practical settings. Furthermore, adding cloud-based signal processing could lower latency further and boost the responsiveness of BCI systems.
In the end, this study's findings aid in the continued development of interactions between the brain and computer as useful instruments to enhance the freedom and standard of living for people with neurological disorders or motor impairments. As these technologies continue to evolve, their potential applications in assistive devices, rehabilitation, and daily interaction systems will only expand.
Here are some suggestions for cutting-edge areas of BCI technology:
· Hybrid BCI Systems: These combine EEG with other signals like EMG and eye-tracking to improve accuracy and functionality, particularly in applications such as wheelchair control.
· AI-Powered Adaptive BCIs: Incorporating machine learning allows BCIs to adapt to individual users' cognitive and physiological changes over time, enhancing personalization and effectiveness, especially for long-term users like ALS patients.
· BCI for Cognitive State Monitoring and Mental Health: BCIs can be used to monitor mental health and cognitive workload, detecting issues like stress or cognitive decline in real-time and providing feedback to help users manage their emotional states.
· Cloud-Based BCI Processing: This approach offloads complex processing to the cloud, making BCI devices lighter and more portable, thereby increasing accessibility while maintaining real-time performance.
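As a minimal illustration of the hybrid-BCI idea above, a late-fusion rule that blends class probabilities from an EEG decoder with those from an auxiliary decoder (EMG or eye tracking) can be sketched as follows. The fusion weight, command set, and probability values are hypothetical, and real systems typically learn the weights from calibration data.

```python
import numpy as np

def fuse_decisions(p_eeg, p_aux, w_eeg=0.6):
    """Weighted late fusion of two classifiers' class-probability vectors.

    p_eeg: per-class probabilities from an EEG decoder.
    p_aux: per-class probabilities from an auxiliary decoder (e.g. EMG).
    w_eeg: trust placed in the EEG stream (the rest goes to p_aux).
    Returns (winning class index, fused probability vector)."""
    p_eeg = np.asarray(p_eeg, dtype=float)
    p_aux = np.asarray(p_aux, dtype=float)
    fused = w_eeg * p_eeg + (1 - w_eeg) * p_aux
    fused /= fused.sum()  # renormalize to a proper distribution
    return int(np.argmax(fused)), fused

# Hypothetical wheelchair commands: [stop, forward, left, right].
cls, probs = fuse_decisions([0.40, 0.35, 0.15, 0.10],
                            [0.10, 0.70, 0.10, 0.10])
print(cls)  # 1 -> "forward": the auxiliary evidence tips an ambiguous EEG reading
```

The same structure accommodates more streams (add more weighted terms) or adaptive weights that shift toward whichever modality has been more reliable for the current user.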
In particular, for people with disabilities, Brain-Computer Interface (BCI) technology continues to hold great promise for facilitating communication between the brain and external devices. By enhancing signal processing methods like Linear Discriminant Analysis (LDA) and Common Spatial Patterns (CSP), this work has improved EEG-based BCIs in terms of responsiveness and classification accuracy. These developments pave the way for more useful and dependable BCI applications, especially in assistive technologies like mobility aids and communication aids.
Although the results show promise, issues like signal fluctuation and inter-user variability still exist. By addressing these issues and customizing BCIs for each user, adaptive learning models may greatly enhance practical application and usability. Furthermore, combining EEG with other physiological measurements, such as eye tracking or EMG, may result in hybrid BCI systems that enhance precision and functionality.
Looking ahead, cloud-based processing integrated into BCI architectures could lead to lighter, more portable devices, and AI-driven adaptive BCIs could further improve efficacy and personalization. Furthermore, by offering real-time feedback and intervening on fatigue or stress, BCIs may become prevalent in monitoring cognitive states and mental health.
In summary, although this study's advances move us closer to real-world BCI systems, there is still much room for future research to overcome existing constraints and broaden the scope of use. As BCI technology develops further, it will surely help people with disabilities live better lives and provide creative solutions in both everyday and clinical settings.
[1] Holz, E.M., Höhne, J., Staiger-Sälzer, P., Tangermann, M., Kübler, A. (2013). Brain–computer interface controlled gaming: Evaluation of usability by severely motor restricted end-users. Artificial intelligence in medicine, 59(2): 111-120. https://doi.org/10.1016/j.artmed.2013.08.001
[2] Lee, J.C., Tan, D.S. (2006). Using a low-cost electroencephalograph for task classification in HCI research. In Proceedings of the 19th Annual ACM Symposium on User Interface Software and Technology, Montreux Switzerland, pp. 81-90. https://doi.org/10.1145/1166253.1166268
[3] Ashok, S. (2017). High-level hands-free control of wheelchair–a review. Journal of Medical Engineering & Technology, 41(1): 46-64. https://doi.org/10.1080/03091902.2016.1210685
[4] Khaleel, A.H., Ali, T.H., Ibrahim, A.W.S. (2023). Enhancing human-computer interaction: a comprehensive analysis of assistive virtual keyboard technologies. Ingénierie des Systèmes d’Information, 28(6): 1709-1717. https://doi.org/10.18280/isi.280616
[5] Kosmyna, N., Lécuyer, A. (2019). A conceptual space for EEG-based brain-computer interfaces. PloS One, 14(1): e0210145. https://doi.org/10.1371/journal.pone.0210145
[6] Kosmyna, N., Tarpin-Bernard, F., Rivet, B. (2015). Conceptual priming for in-game BCI training. ACM Transactions on Computer-Human Interaction (TOCHI), 22(5): 1-25. https://doi.org/10.1145/2808228
[7] Kosmyna, N., Tarpin-Bernard, F., Rivet, B. (2015). Towards brain computer interfaces for recreational activities: Piloting a drone. In Human-Computer Interaction–INTERACT 2015: 15th IFIP TC 13 International Conference, Bamberg, Germany, pp. 506-522. https://doi.org/10.1007/978-3-319-22701-6_37
[8] Lee, E.C., Woo, J.C., Kim, J.H., Whang, M., Park, K.R. (2010). A brain–computer interface method combined with eye tracking for 3D interaction. Journal of Neuroscience Methods, 190(2): 289-298. https://doi.org/10.1016/j.jneumeth.2010.05.008
[9] Mercep, L., Spiegelberg, G., Knoll, A. (2013). Reducing the impact of vibration-caused artifacts in a brain-computer interface using gyroscope data. In Eurocon 2013, Zagreb, Croatia, pp. 1753-1756. https://doi.org/10.1109/eurocon.2013.6625214
[10] Naveen, R.S., Julian, A. (2013). Brain computing interface for wheel chair control. In 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), Tiruchengode, India, pp. 1-5. https://doi.org/10.1109/icccnt.2013.6726572
[11] Akram, F., Metwally, M.K., Han, H.S., Jeon, H.J., Kim, T.S. (2013). A novel P300-based BCI system for words typing. In 2013 International Winter Workshop on Brain-Computer Interface (BCI), pp. 24-25. https://doi.org/10.1109/iww-bci.2013.6506617
[12] Jirayucharoensak, S., Pan-Ngum, S., Israsena, P. (2014). EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. The Scientific World Journal, 2014(1): 627892. https://doi.org/10.1155/2014/627892
[13] American Electroencephalographic Society. (1991). Guidelines for standard electrode position nomenclature. Journal of Clinical Neurophysiology, 8(2): 200-202.
[14] Ramadan, R.A., Refat, S., Elshahed, M.A., Ali, R.A. (2015). Basics of brain computer interface. Brain-Computer Interfaces: Current Trends and Applications, 31-50. https://doi.org/10.1007/978-3-319-10978-7_2
[15] Rashid, M., Sulaiman, N., Majeed, A.P.P.A., Musa, R.M., Nasir, A.F.A., Bari, B.S., Khatun, S. (2020). Current status, challenges, and possible solutions of eeg-based brain-computer interface: A comprehensive review. Frontiers in Neurorobotics, 14: 25. https://doi.org/10.3389/fnbot.2020.00025
[16] Nagwanshi, K.K., Noonia, A., Tiwari, S., Doohan, N.V., Kumawat, V., Ahanger, T.A., Amoatey, E.T. (2022). Wearable sensors with Internet of Things (IoT) and vocabulary-based acoustic signal processing for monitoring children’s health. Computational Intelligence and Neuroscience, 2022(1): 9737511. https://doi.org/10.1155/2022/9737511
[17] Wilson, H.R., Cowan, J.D. (1973). A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Kybernetik, 13(2): 55-80. https://doi.org/10.1007/bf00288786
[18] Värbu, K., Muhammad, N., Muhammad, Y. (2022). Past, present, and future of EEG-based BCI applications. Sensors, 22(9): 3331. https://doi.org/10.3390/s22093331
[19] Omer, K., Ferracuti, F., Freddi, A. (2023). Real-time mobile robot obstacles detection and avoidance through EEG signals. In Proceedings of the 10th International Brain-Computer Interface Meeting 2023. https://doi.org/10.3217/978-3-85125-962-9-1
[20] Kumar Shukla, P., Kumar Shukla, P., Sharma, P., Rawat, P., Samar, J., Moriwal, R., Kaur, M. (2020). Efficient prediction of drug–drug interaction using deep learning models. IET Systems Biology, 14(4): 211-216. https://doi.org/10.1049/iet-syb.2019.0116
[21] Wang, X., Chen, Y., Ding, M. (2007). Testing for statistical significance in Bispectra: A surrogate data approach and application to neuroscience. IEEE Transactions on Biomedical Engineering, 54(11): 1974-1982. https://doi.org/10.1109/tbme.2007.895751
[22] Kumar, A., Saini, M., Gupta, N., Sinwar, D., Singh, D., Kaur, M., Lee, H.N. (2022). Efficient stochastic model for operational availability optimization of cooling tower using metaheuristic algorithms. IEEE Access, 10: 24659-24677. https://doi.org/10.1109/access.2022.3143541
[23] Khadam, Z.M., Abdulhameed, A.A., Hammad, A. (2024). Enhancing meditation techniques and insights using feature analysis of electroencephalography (EEG). Al-Mustansiriyah Journal of Science, 35(1): 66-77. https://doi.org/10.23851/mjs.v35i1.1457
[24] Khambra, G., Shukla, P. (2023). Novel machine learning applications on fly ash based concrete: An overview. Materials Today: Proceedings, 80: 3411-3417. https://doi.org/10.1016/j.matpr.2021.07.262
[25] Kim, J., Jiang, X., Forenzo, D., Liu, Y., Anderson, N., Greco, C.M., He, B. (2022). Immediate effects of short-term meditation on sensorimotor rhythm-based brain–computer interface performance. Frontiers in Human Neuroscience, 16: 1019279. https://doi.org/10.3389/fnhum.2022.1019279
[26] Gutierrez-Martinez, J., Mercado-Gutierrez, J.A., Carvajal-Gamez, B.E., Rosas-Trigueros, J.L., Contreras-Martinez, A.E. (2021). Artificial intelligence algorithms in visual evoked potential-based brain-computer interfaces for motor rehabilitation applications: Systematic review and future directions. Frontiers in Human Neuroscience, 15: 772837. https://doi.org/10.3389/fnhum.2021.772837
[27] Salahuddin, U., Gao, P.X. (2021). Signal generation, acquisition, and processing in brain machine interfaces: A unified review. Frontiers in Neuroscience, 15: 728178. https://doi.org/10.3389/fnins.2021.728178
[28] Wolpaw, J.R. (2003). Brain-computer interfaces: Signals, methods, and goals. In First International IEEE EMBS Conference on Neural Engineering, 2003. Conference Proceedings. Capri, Italy, pp. 584-585. https://doi.org/10.1109/cne.2003.1196894
[29] Yin, S., Dokos, S., Lovell, N.H. (2013). Bidomain modeling of neural tissue. Neural Engineering, 389-404. https://doi.org/10.1007/978-1-4614-5227-0_8
[30] Lebedev, M.A., Nicolelis, M.A. (2006). Brain–machine interfaces: Past, present and future. TRENDS in Neurosciences, 29(9): 536-546. https://doi.org/10.1016/j.tins.2006.07.004
[31] Qu, X. (2022). Time Continuity Voting for Electroencephalography (EEG) Classification. Doctoral dissertation, Brandeis University.
[32] Ofner, P., Schwarz, A., Pereira, J., Müller-Putz, G.R. (2017). Upper limb movements can be decoded from the time-domain of low-frequency EEG. PloS one, 12(8): e0182578. https://doi.org/10.1371/journal.pone.0182578
[33] Ramírez-Arias, F.J., García-Guerrero, E.E., Tlelo-Cuautle, E., Colores-Vargas, J.M., García-Canseco, E., López-Bonilla, O.R., Inzunza-González, E. (2022). Evaluation of machine learning algorithms for classification of EEG signals. Technologies, 10(4): 79. https://doi.org/10.3390/technologies10040079
[34] Fong-Mata, M.B., García-Guerrero, E.E., Mejía-Medina, D.A., López-Bonilla, O.R., Villarreal-Gómez, L.J., Zamora-Arellano, F., Inzunza-González, E. (2020). An artificial neural network approach and a data augmentation algorithm to systematize the diagnosis of deep-vein thrombosis by using wells’ criteria. Electronics, 9(11): 1810. https://doi.org/10.3390/electronics9111810
[35] Janiesch, C., Zschech, P., Heinrich, K. (2021). Machine learning and deep learning. Electronic Markets, 31(3): 685-695. https://doi.org/10.1007/s12525-021-00475-2
[36] Al Faiz, M.Z., Al-Hamadani, A.A. (2019). Online brain computer interface based five classes EEG to control humanoid robotic hand. In 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, pp. 406-410. https://doi.org/10.1109/tsp.2019.8769072
[37] LeCun, Y., Bengio, Y., Hinton, G. (2015). Deep learning. Nature, 521(7553): 436-444. https://doi.org/10.1038/nature14539
[38] Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5): 206-215. https://doi.org/10.1038/s42256-019-0048-x
[39] Zhang, Y., Ling, C. (2018). A strategy to apply machine learning to small datasets in materials science. NPJ Computational Materials, 4(1): 25. https://doi.org/10.1038/s41524-018-0081-z
[40] Vaid, S., Singh, P., Kaur, C. (2015). EEG signal analysis for BCI interface: A review. In 2015 Fifth International Conference on Advanced Computing & Communication Technologies, Haryana, India, pp. 143-147. https://doi.org/10.1109/acct.2015.72
[41] Aggarwal, S., Chugh, N. (2019). Signal processing techniques for motor imagery brain computer interface: A review. Array, 1: 100003. https://doi.org/10.1016/j.array.2019.100003
[42] Baniqued, P.D.E., Stanyer, E.C., Awais, M., Alazmani, A., Jackson, A.E., Mon-Williams, M.A., Holt, R.J. (2021). Brain–computer interface robotics for hand rehabilitation after stroke: A systematic review. Journal of Neuroengineering and Rehabilitation, 18: 1-25. https://doi.org/10.1186/s12984-021-00820-8
[43] Padfield, N., Camilleri, K., Camilleri, T., Fabri, S., Bugeja, M. (2022). A comprehensive review of endogenous EEG-based BCIs for dynamic device control. Sensors, 22(15): 5802. https://doi.org/10.3390/s22155802
[44] Wang, T., Du, S., Dong, E. (2021). A novel method to reduce the motor imagery BCI illiteracy. Medical & Biological Engineering & Computing, 59: 2205-2217. https://doi.org/10.1007/s11517-021-02449-0
[45] Choi, J., Kim, K.T., Jeong, J.H., Kim, L., Lee, S.J., Kim, H. (2020). Developing a motor imagery-based real-time asynchronous hybrid BCI controller for a lower-limb exoskeleton. Sensors, 20(24): 7309. https://doi.org/10.3390/s20247309
[46] Wang, Y., Gao, X., Hong, B., Jia, C., Gao, S. (2008). Brain-computer interfaces based on visual evoked potentials. IEEE Engineering in Medicine and Biology Magazine, 27(5): 64-71. https://doi.org/10.1109/memb.2008.923958
[47] Kübler, A., Kotchoubey, B., Kaiser, J., Wolpaw, J.R., Birbaumer, N. (2001). Brain–computer communication: Unlocking the locked in. Psychological Bulletin, 127(3): 358-375. https://doi.org/10.1037/0033-2909.127.3.358
[48] Hassan, T.A. (2013). The glottal modulation components for speaker voice recognition. Al-Mustansiriyah Journal of Science, 24(5): 519-532.
[49] Middendorf, M., McMillan, G., Calhoun, G., Jones, K.S. (2000). Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Transactions on Rehabilitation Engineering, 8(2): 211-214. https://doi.org/10.1109/86.847819
[50] Bonaci, T., Chizeck, H.J. (2013). Privacy by design in brain-computer interfaces. Technical Report Number UWEETR-2013-0001.
[51] Rao, T.K., Lakshmi, M.R., Prasad, T.V. (2012). An exploration on brain computer interface and its recent trends. arXiv preprint arXiv:1211.2737. https://doi.org/10.48550/ARXIV.1211.2737
[52] Abdulkader, S.N., Atia, A., Mostafa, M.S.M. (2015). Brain computer interfacing: Applications and challenges. Egyptian Informatics Journal, 16(2): 213-230. https://doi.org/10.1016/j.eij.2015.06.002
[53] Brunner, C., Birbaumer, N., Blankertz, B., Guger, C., Kubler, A., Mattia, D., Millan, J.D.R., Miralles, F., Nijholt, A., Opisso, E., Ramsey, N., Salomon, P., Muller-Putz, G.R. (2015). BNCI Horizon 2020: Towards a roadmap for the BCI community. Brain-Computer Interfaces, 2(1): 1-10. https://doi.org/10.1080/2326263x.2015.1008956
[54] Jurcak, V., Tsuzuki, D., Dan, I. (2007). 10/20, 10/10, and 10/5 systems revisited: Their validity as relative head-surface-based positioning systems. Neuroimage, 34(4): 1600-1611. https://doi.org/10.1016/j.neuroimage.2006.09.024
[55] Peng, H., Hu, B., Qi, Y., Zhao, Q., Ratcliffe, M. (2011, May). An improved EEG de-noising approach in electroencephalogram (EEG) for home care. In 2011 5th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) and Workshops, Dublin, Ireland, pp. 469-474. https://doi.org/10.4108/icst.pervasivehealth.2011.246021
[56] Khalid, S., Khalil, T., Nasreen, S. (2014). A survey of feature selection and feature extraction techniques in machine learning. In 2014 Science and Information Conference, London, UK, pp. 372-378. https://doi.org/10.1109/sai.2014.6918213
[57] Stancin, I., Cifrek, M., Jovic, A. (2021). A review of EEG signal features and their application in driver drowsiness detection systems. Sensors, 21(11): 3786. https://doi.org/10.3390/s21113786
[58] Subasi, A., Gursoy, M.I. (2010). EEG signal classification using PCA, ICA, LDA and support vector machines. Expert Systems with Applications, 37(12): 8659-8666. https://doi.org/10.1016/j.eswa.2010.06.065
[59] Yazdani, A., Ebrahimi, T., Hoffmann, U. (2009). Classification of EEG signals using Dempster Shafer theory and a k-nearest neighbor classifier. In 2009 4th International IEEE/EMBS Conference on Neural Engineering, Antalya, Turkey, pp. 327-330. https://doi.org/10.1109/ner.2009.5109299
[60] Edla, D.R., Mangalorekar, K., Dhavalikar, G., Dodia, S. (2018). Classification of EEG data for human mental state analysis using Random Forest Classifier. Procedia Computer Science, 132: 1523-1532. https://doi.org/10.1016/j.procs.2018.05.116
[61] Saragih, A.S., Pamungkas, A., Zain, B.Y., Ahmed, W. (2020). Electroencephalogram (EEG) signal classification using artificial neural network to control electric artificial hand movement. In IOP Conference Series: Materials Science and Engineering, 938(1): 012005. https://doi.org/10.1088/1757-899x/938/1/012005
[62] Vaughan, T.M., McFarland, D.J., Schalk, G., Sarnacki, W.A., Robinson, L., Wolpaw, J.R. (2001). EEG-based brain–computer interface: Development of a speller. In Soc. Neurosci. Abstr, 27(1): 167.
[63] Farwell, L.A., Donchin, E. (1988). Talking off the top of your head: Toward a mental prosthesis utilizing event-related brain potentials. Electroencephalography and clinical Neurophysiology, 70(6): 510-523. https://doi.org/10.1016/0013-4694(88)90149-6
[64] Moore, M.T., Ope, Y., Yadav, A. (2004). The BrainBrowser, a brain-computer interface for internet navigation. Society for Neuroscience, San Diego, CA.
[65] LaFleur, K., Cassady, K., Doud, A., Shades, K., Rogin, E., He, B. (2013). Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. Journal of neural Engineering, 10(4): 046003. https://doi.org/10.1088/1741-2560/10/4/046003
[66] Carabalona, R., Grossi, F., Tessadri, A., Caracciolo, A., Castiglioni, P., De Munari, I. (2010). Home smart home: Brain-computer interface control for real smart home environments. In Proceedings of the 4th International Convention on Rehabilitation Engineering & Assistive Technology, 51.
[67] Shenoy, P., Tan, D.S. (2008). Human-aided computing: Utilizing implicit human processing to classify images. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence Italy, pp. 845-854. https://doi.org/10.1145/1357054.1357188
[68] Miranda, E.R., Brouse, A., Boskamp, B., Mullaney, H. (2005). Plymouth brain-computer music interface project: Intelligent assistive technology for music-making. In ICMC.
[69] Birbaumer, N., Hinterberger, T., Kubler, A., Neumann, N. (2003). The thought-translation device (TTD): Neurobehavioral mechanisms and clinical outcome. IEEE transactions on Neural Systems and rehabilitation engineering, 11(2): 120-123. https://doi.org/10.1109/tnsre.2003.814439
[70] Llorella, F.R., Patow, G., Azorín, J.M. (2020). Convolutional neural networks and genetic algorithm for visual imagery classification. Physical and Engineering Sciences in Medicine, 43(3): 973-983. https://doi.org/10.1007/s13246-020-00894-z
[71] Pfurtscheller, J., Rupp, R., Müller, G.R., Fabsits, E., Korisek, G., Gerner, H.J., Pfurtscheller, G. (2005). Funktionelle Elektrostimulation anstatt Operation? Der Unfallchirurg, 108(7): 587-590. https://doi.org/10.1007/s00113-004-0876-x
[72] Andujar, M., Gilbert, J.E. (2013). Let's learn!: Enhancing user's engagement levels through passive brain-computer interfaces. In CHI'13 Extended Abstracts on Human Factors in Computing Systems, Paris France, pp. 703-708. https://doi.org/10.1145/2468356.2468480
[73] Afergan, D., Peck, E.M., Solovey, E.T., Jenkins, A., Hincks, S.W., Brown, E.T., Jacob, R.J. (2014). Dynamic difficulty using brain metrics of workload. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Toronto Ontario Canada, pp. 3797-3806. https://doi.org/10.1145/2556288.2557230
[74] Afergan, D. (2014). Using brain-computer interfaces for implicit input. In Adjunct Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, Honolulu Hawaii USA, pp. 13-16. https://doi.org/10.1145/2658779.2661166
[75] Eskandari, P., Erfanian, A. (2008). Improving the performance of brain-computer interface through meditation practicing. In 2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, pp. 662-665. https://doi.org/10.1109/iembs.2008.4649239
[76] Frey, J., Daniel, M., Castet, J., Hachet, M., Lotte, F. (2016). Framework for electroencephalography-based evaluation of user experience. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, San Jose California USA, pp. 2283-2294. https://doi.org/10.1145/2858036.2858525
[77] Woo, S., Lee, J., Kim, H., Chun, S., Lee, D., Gwon, D., Ahn, M. (2021). An open source-based BCI application for virtual world tour and its usability evaluation. Frontiers in Human Neuroscience, 15: 647839. https://doi.org/10.3389/fnhum.2021.647839
[78] Lomelin-Ibarra, V.A., Gutierrez-Rodriguez, A.E., Cantoral-Ceballos, J.A. (2022). Motor imagery analysis from extensive EEG data representations using convolutional neural networks. Sensors, 22(16): 6093. https://doi.org/10.3390/s22166093
[79] Ghaemi, A., Rashedi, E., Pourrahimi, A.M., Kamandar, M., Rahdari, F. (2017). Automatic channel selection in EEG signals for classification of left or right hand movement in Brain Computer Interfaces using improved binary gravitation search algorithm. Biomedical Signal Processing and Control, 33: 109-118. https://doi.org/10.1016/j.bspc.2016.11.018
[80] Cho, J.H., Jeong, J.R., Kim, D.J., Lee, S.W. (2020). A novel approach to classify natural grasp actions by estimating muscle activity patterns from EEG signals. In 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea (South), pp. 1-4. https://doi.org/10.1109/bci48061.2020.9061627
[81] Lee, S.H., Lee, M., Lee, S.W. (2020). Spatio-temporal dynamics of visual imagery for intuitive brain-computer interface. In 2020 8th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea (South), pp. 1-5. https://doi.org/10.1109/bci48061.2020.9061638
[82] Delijorge, J., Mendoza-Montoya, O., Gordillo, J.L., Caraza, R., Martinez, H.R., Antelis, J.M. (2020). Evaluation of a P300-based brain-machine interface for a robotic hand-orthosis control. Frontiers in Neuroscience, 14: 589659. https://doi.org/10.3389/fnins.2020.589659
[83] Kosmyna, N., Lindgren, J.T., Lécuyer, A. (2018). Attending to visual stimuli versus performing visual imagery as a control strategy for EEG-based brain-computer interfaces. Scientific Reports, 8(1): 13222. https://doi.org/10.1038/s41598-018-31472-9
[84] Kilmarx, J., Gamper, H., Emmanouilidou, D., Johnston, D., Cutrell, E., Wilson, A., Tashev, I. (2022). Investigating visual imagery as a BCI control strategy: A pilot study. In 2022 10th International Winter Conference on Brain-Computer Interface (BCI), Gangwon-do, Korea (South), pp. 1-6. https://doi.org/10.1109/bci53720.2022.9734919
[85] Kim, S.J., Kwon, B.H., Jeong, J.H. (2021). Intuitive visual imagery decoding for drone swarm formation control from EEG signals. In 2021 9th International Winter Conference on Brain-Computer Interface (BCI), Gangwon, Korea (South), pp. 1-6. https://doi.org/10.1109/bci51272.2021.9385303
[86] Lee, S.H., Lee, M., Jeong, J.H., Lee, S.W. (2019). Towards an EEG-based intuitive BCI communication system using imagined speech and visual imagery. In 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, pp. 4409-4414. https://doi.org/10.1109/smc.2019.8914645
[87] Borhani, S., Kilmarx, J., Saffo, D., Ng, L., Abiri, R., Zhao, X. (2019). Optimizing prediction model for a noninvasive brain–computer interface platform using channel selection, classification, and regression. IEEE Journal of Biomedical and Health Informatics, 23(6): 2475-2482. https://doi.org/10.1109/jbhi.2019.2892379
[88] Llorella, F.R., Azorín, J.M., Patow, G. (2021). Black Hole algorithm with convolutional neural networks for the creation of a Brain-Switch using visual perception. In 2021 IEEE 34th International Symposium on Computer-Based Medical Systems (CBMS), Aveiro, Portugal, pp. 44-49. https://doi.org/10.1109/cbms52027.2021.00015
[89] Shimizu, H., Srinivasan, R. (2022). Improving classification and reconstruction of imagined images from EEG signals. PLoS ONE, 17(9): e0274847. https://doi.org/10.1371/journal.pone.0274847
[90] Bobrov, P., Frolov, A., Cantor, C., Fedulova, I., Bakhnyan, M., Zhavoronkov, A. (2011). Brain-computer interface based on generation of visual images. PLoS ONE, 6(6): e20674. https://doi.org/10.1371/journal.pone.0020674
[91] Lee, S., Jang, S., Jun, S.C. (2022). Exploring the ability to classify visual perception and visual imagery EEG data: Toward an Intuitive BCI System. Electronics, 11(17): 2706. https://doi.org/10.3390/electronics11172706
[92] Chen, X., Zhao, B., Wang, Y., Gao, X. (2019). Combination of high-frequency SSVEP-based BCI and computer vision for controlling a robotic arm. Journal of Neural Engineering, 16(2): 026012. https://doi.org/10.1088/1741-2552/aaf594
[93] Dhiman, R. (2022). Motor imagery signal classification using Wavelet packet decomposition and modified binary grey wolf optimization. Measurement: Sensors, 24: 100553. https://doi.org/10.1016/j.measen.2022.100553
[94] Su, J., Yang, Z., Yan, W., Sun, W. (2020). Electroencephalogram classification in motor-imagery brain–computer interface applications based on double-constraint nonnegative matrix factorization. Physiological Measurement, 41(7): 075007. https://doi.org/10.1088/1361-6579/aba07b