Development of a Brain Controlled Assistive Feeding System with OpenBCI

Sunday A. Afolalu, Aminone O. Alexander, Olawale C. Ogunnigbo*, Tin T. Ting

Mechanical and Mechatronics Engineering Department, Afe Babalola University, Ado-Ekiti 360231, Nigeria

Department of Mechanical Science, University of Johannesburg, Johannesburg 2094, South Africa

Faculty of Data Science and Information Technology, INTI International University, Nilai 71800, Malaysia

Corresponding Author Email: charles_olawale@yahoo.com

Page: 215-223 | DOI: https://doi.org/10.18280/jesa.580203

Received: 19 September 2024 | Revised: 19 November 2024 | Accepted: 6 December 2024 | Available online: 28 February 2025

© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

Abstract: 

Amyotrophic Lateral Sclerosis (ALS) patients, individuals confined to their homes, and those with upper limb disabilities frequently experience feeding difficulties and malnourishment. Asphyxia or choking can occur during feeding, which is often uncomfortable and time-consuming. Robotic devices currently assist such individuals in eating. However, persons with severe disabilities, such as sensory loss or difficulty with basic physical mobility, cannot use assistive robots that require movement from the user. A robotic system controlled solely by brain signals is of considerable help in this area. Therefore, a prototype of an electroencephalogram (EEG)-based feeding robot is proposed, built to the specifications of a real-time assistive robot driven by a Brain Computer Interface (BCI). A feeding assistance robot is an assistive technology used to help people who are unable to independently move food from a container into their mouths; such robots have been introduced to help those who experience upper limb function loss due to cerebral palsy, spinal cord injuries, or amputations, and who may therefore find it impossible to feed themselves. A set of experiments was carried out with healthy subjects to validate the proposed system, and the results are presented here. According to the experimental data, the developed system can perform the necessary feeding tasks in real time with an average error rate of about 23% and an overall accuracy of 77%. With further supervision, this level of inaccuracy can be reduced or, in certain situations, completely eliminated.

Keywords: 

amyotrophic lateral sclerosis, disabilities, electroencephalogram, brain computer interface, feeding

1. Introduction

Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disease that affects motor neurons in the brain and spinal cord [1]. This condition leads to a gradual loss of muscle control, which significantly impacts the ability to perform daily activities, including eating and drinking [2]. The need for assistive technologies, such as a brain-controlled feeding system, arises from the challenges faced by individuals with ALS as their disease progresses. OpenBCI, an open-source neurotechnology firm founded by Joel Murphy and Conor Russomanno in late 2013 [3], provides a freely accessible scientific platform for sampling, analysing and displaying electrical signals from the human body. Through open-source hardware and firmware at a low implementation cost, an Open Brain-Computer Interface (OpenBCI) provides unparalleled flexibility and freedom. To build specialized drivers with cutting-edge functionalities and features, it takes advantage of sophisticated software development kits and strong hardware platforms, although the performance of OpenBCI can still be severely limited by a number of constraints [4]. In addition to supporting variable sampling rates, communication protocols, free electrode placements, and single-marker synchronization, the framework manages a variety of distributed computing activities [5]. OpenBCI boards work with conventional electroencephalogram (EEG) electrodes and can be used to measure and record the electrical activity produced by the heart, brain, and muscles. The board used in this project is the OpenBCI Ganglion Board, which is WiFi- and Bluetooth-enabled for transmitting signals [6].
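As an illustration of how such a board can be accessed, the sketch below opens a Ganglion data stream in Python using the open-source BrainFlow library. This is a minimal sketch of one possible access path only (this study used the OpenBCI GUI), and the dongle's serial port name is an assumption.

```python
import time

from brainflow.board_shim import BoardIds, BoardShim, BrainFlowInputParams

# Serial port of the Ganglion dongle -- an assumption; e.g. /dev/ttyACM0 on Linux.
params = BrainFlowInputParams()
params.serial_port = "COM3"

board = BoardShim(BoardIds.GANGLION_BOARD.value, params)
board.prepare_session()
board.start_stream()
time.sleep(5)                      # stream five seconds of data
data = board.get_board_data()      # 2-D array: rows are channels, columns are samples
board.stop_stream()
board.release_session()

eeg_rows = BoardShim.get_eeg_channels(BoardIds.GANGLION_BOARD.value)
fs = BoardShim.get_sampling_rate(BoardIds.GANGLION_BOARD.value)
print(f"{data.shape[1]} samples on {len(eeg_rows)} EEG channels at {fs} Hz")
```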

With a brain-computer interface (BCI), devices can be operated through the electrical activity of the brain. Applications of this technology are numerous and include neuromarketing and neuroeconomics, games and entertainment, security, cognitive state analysis frameworks for medical protocols, rehabilitation for people with motor disabilities, diagnosis of mental disorders, and emotion-based analysis. This project concerns the use of a BCI for the rehabilitation of people with motor disabilities, particularly of the upper limbs [4]. BCIs provide an assistive technology (AT) interface that does not require movement, by using data taken directly from the human brain [7]. BCIs have made it possible for people to control environmental aspects, type messages, and use an on-screen mouse. However, few BCIs are used for daily needs; the majority are employed in laboratories [8]. BCIs, like all AT, are intended to increase independence, facilitate involvement, and improve function [9].

According to Millan et al. [10], the ability to manipulate the external world without utilizing the efferent pathways of the human nervous system offers a fresh modality of interaction that can accelerate and enhance the human sensor–effector loop. Recent years have seen a number of applications of brain–computer interface (BCI) research in several domains, including communication, environmental control, mobility and robotics, and neuro-prosthetics. The majority of ongoing BCI research projects are still at the stage of design validation or in-lab demonstration, and many problems need to be resolved before the demonstration systems can be turned into practical devices. For the system to be practical, it must offer a sufficiently high transmission rate, which depends on the number of selectable targets as well as the accuracy and speed of target identification. To be easily used in homes or hospitals, the system should be small and lightweight. Additionally, the system should be quick to set up, with minimal user training and rapid electrode installation [11].

Figure 1. An OpenBCI electrode cap for biosensing EEG [12]

The human body emits certain electrical signals from its parts when performing an activity. These signals include electroencephalography (EEG) from the brain, the electromyogram (EMG) from the muscles, the electrocardiogram (ECG or EKG) from the heart, and the electrooculogram (EOG) from measuring the movement of the human eyes. These signals measure bio-potentials, the electrical output of the human body, and are acquired by placing a number of electrodes on the specific area under study [12]. Figure 1 illustrates a typical application of OpenBCI.

Amyotrophic Lateral Sclerosis (ALS) presents significant challenges for patients and caregivers. The progressive muscle weakness leads to loss of mobility and difficulties with daily tasks, while dysphagia increases the risk of choking and malnutrition, often necessitating dietary adjustments or the use of feeding tubes [13]. Communication becomes problematic as speech muscles weaken, and many patients may eventually lose the ability to speak entirely, requiring alternative methods to communicate [14]. Respiratory complications, including shortness of breath and infection risks, are also common [15]. The emotional toll is considerable, with many experiencing depression and anxiety due to the loss of independence, leading to feelings of isolation [14]. Caregivers bear increased responsibilities and emotional strain, often resulting in financial burdens from the costs of care and assistive devices [15]. ALS requires a multidisciplinary approach to care, which complicates coordination and access to specialists [13]. The variable progression of the disease creates uncertainty, making it challenging to predict care needs and plan for the future [14]. Addressing these challenges is crucial for enhancing the quality of life of those affected by ALS [16].

By using a feeding robot, this study attempts to address the issues outlined above. The brain-controlled assistive feeding system stands out due to its direct neural control, user-centric design, real-time feedback, integration capabilities, focus on quality of life, and accessibility [17]. These innovations not only enhance the functionality of assistive feeding but also address the emotional and psychological needs of ALS patients, marking a significant advancement in the field of assistive technologies. The proposed feeding robot makes feeding easier, increases the degree of independence of those with severe disabilities, and enhances their quality of life. As a result, fewer caregivers are required to care for persons living with disabilities in Nigeria.

2. Materials and Methods

2.1 Materials

The materials and equipment required for the brain-controlled assistive feeding system can be grouped into three categories: the EEG sensing components, which are associated with extracting the EEG signals, as shown in Table 1; the feeding actuator components, which carry out the feeding process for the selected subjects, as illustrated in Table 2; and the software components, as listed in Table 3.

Table 1. Equipment for EEG sensing and their usage

| S/N | Product Name | Qty | Specifications | Use |
|---|---|---|---|---|
| 1 | Gold Cup Electrodes | 30 | 26 gauge stranded wire; 1 m or 1.5 m color-coded cable; one female header termination per cable; PVC insulation rated to 80°C; overall OD 1.45 mm (0.057") | Attached to the subject's head to receive EEG signals from the brain |
| 2 | OpenBCI 4-Channel Ganglion Board | 1 | Powered by 3.3-6 V DC battery only; current draw 14 mA idle, 15 mA connected and streaming; MCP3912 analog front end; LIS2DH 3-axis accelerometer; Simblee BLE radio module (Arduino compatible); board dimensions 2.41" × 2.41"; SD card storage not supported; mount holes 3/16" ID, 0.8" × 2.166" on center | Biosensing board preprogrammed to sense and process the acquired EEG signals |
| 3 | Ten20 Paste | 2 | 35 mm cross; latex and PVC free; diaphoretic; hypoallergenic; single use; residue-free | Improves electrode contact and the accuracy of the sampled data |
| 4 | Ganglion Dongle | 1 | Interface: Bluetooth 4.0 and USB; maximum transfer rate 1.0 Mbps; Texas Instruments CC2540 chipset with integrated antenna; 5 V supply, up to 90 mA; 2.4 GHz radio frequency; FCC, CE and IC approved | Transfers the acquired EEG data wirelessly to the OpenBCI GUI |

Table 2. Components for the feeding actuation and their usage

| S/N | Product Name | Qty | Specifications | Use |
|---|---|---|---|---|
| 1 | Cardboard/Strawboard | Large | 100 mm × 50 mm | Serves as the casing for the system |
| 2 | Servomotors | 5 | 40 mm × 19 mm × 43 mm; weight 56 g; operating speed 0.17 s/60° (4.8 V, no load); operating voltage 4.8-7.2 V | Drive the robotic arm to achieve the desired motion |
| 3 | Gripper End Effector | 1 | 107 mm × 98 mm; material: aluminum alloy; weight 60.4 g (without motors); maximum opening 55 mm | Actuator which picks up the food |
| 4 | Plastic Sterile Spoon | 1 | 5.4 in × 1.5 in × 0.43 in | Actuator which places the food in the subject's mouth for consumption |
| 5 | 2-DOF Robotic Arm Setup | 1 | 2 degrees of freedom; 2 links; revolute joints; 2 servomotors (MG996R) | Actuator which assists in feeding the subject with any given food |
| 6 | 3-DOF Robotic Arm Setup | 1 | 3 degrees of freedom; 3 links; revolute joints; 3 servomotors (MG996R) | Actuator which assists in feeding the subject with any given food |
| 7 | Arduino Uno | 1 | 14 digital I/O pins; 6 analog inputs; USB connection; 16 MHz ceramic resonator; reset button; power jack | Microcontroller which drives the robotic arm according to the characterized EEG signals |
| 8 | LEDs | 3 | 5 mm (THT) | RED: error in feeding session; AMBER: feeding in session; GREEN: end of feeding session |
| 9 | Battery (Lithium-Ion) | 4 | 1100 mAh, 3.4 V | Powers the system |
| 10 | Connector Wire | 20 | Heavy duty, 11.81" (300 mm) | Tethering of the system |

Table 3. Software integrated development environments (IDEs) and their usage

| S/N | Name | Qty | Use |
|---|---|---|---|
| 1 | OpenBCI GUI 2023 | 1 | Software for interpreting the data |
| 2 | MATLAB/Simulink R2023a | 1 | Software for characterization of the data |
| 3 | SolidWorks/Fusion 360 2023 | 1 | CAD software for designing the prototype of the physical system |
| 4 | Visual Studio 2022 | 1 | Software for running the C++ code for the structure of external components |
| 5 | Arduino IDE 2023 | 1 | Software for controlling the motion of the actuators (robotic arm) |
| 6 | Fritzing 2023 | 1 | Software for designing the schematics of the hardware circuit of the system |

EEG sensing equipment: These are the components responsible for extracting, amplifying and transmitting the EEG signal data acquired from the subjects.

2.2 Methods

Electrodes (dry/wet) were placed on the subject's scalp according to the international 10-20 electrode placement system for the recording of multichannel EEG signals; the signals were extracted with the OpenBCI GUI, and feature extraction and signal amplification were performed in MATLAB. A regression approach was used for data classification due to its ability to provide continuous outputs, interpret probabilities, handle non-linear relationships, and maintain simplicity and robustness [18]. The processed signals were then communicated over serial ports to the Arduino Uno microcontroller. The International 10-20 Electrode Placement System is widely used for positioning electrodes in EEG due to its standardization and reproducibility, allowing for consistent comparisons across studies. It provides comprehensive scalp coverage and correlates electrode placement with specific functional brain areas, facilitating the interpretation of brain activity. Additionally, the system is flexible and user-friendly, making it accessible to researchers and clinicians alike. The Arduino Uno microcontroller drives the robotic arm via motors placed at the joints to achieve the desired motion at each session. Troubleshooting is performed, and the process comes to an end if no error is present in the system.
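As a concrete sketch of this pipeline, the Python snippet below extracts alpha- and beta-band power features with an FFT, fits a logistic-regression classifier (a regression approach of the kind described above), and derives a serial command for the Arduino. This is a minimal illustration only: the sampling rate, the synthetic training windows, and the serial port are assumptions, and the actual implementation used the OpenBCI GUI and MATLAB.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 200                              # Ganglion sampling rate, Hz (assumption)
t = np.arange(2 * FS) / FS            # two-second analysis window

def band_power(window, lo, hi, fs=FS):
    """Mean spectral power of an EEG window in the [lo, hi) Hz band."""
    power = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    return power[(freqs >= lo) & (freqs < hi)].mean()

def features(window):
    # Alpha (8-13 Hz) dominates during relaxation, beta (13-30 Hz) during concentration.
    return [band_power(window, 8, 13), band_power(window, 13, 30)]

rng = np.random.default_rng(0)

def synth_windows(freq, n=40):
    """Synthetic stand-ins for windows recorded during the training sessions."""
    return [np.sin(2 * np.pi * freq * t) + 0.5 * rng.standard_normal(t.size)
            for _ in range(n)]

X = np.array([features(w) for w in synth_windows(10) + synth_windows(20)])
y = np.array([0] * 40 + [1] * 40)     # 0 = relax, 1 = focus

clf = LogisticRegression().fit(X, y)
command = clf.predict([features(synth_windows(20, n=1)[0])])[0]

# A focus prediction would start the feeding motion, e.g. over pyserial:
# serial.Serial("COM4", 9600).write(b"1" if command else b"0")   # port is an assumption
print("serial command:", command)
```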

2.2.1 Design of the prototype

The first step for any research of this kind is a rough paper sketch, which is then standardized and better explained with Computer-Aided Design (CAD), as shown in Figure 2. The CAD model of the feeding system was developed to enable the investigator to best understand the system and the assembly of its joints and links; the system has 6 degrees of freedom, and motion and path planning and other modifications were made to boost its efficiency and kinematics. For smoother and more precise movements, which are essential for minimizing spills and ensuring user comfort, the arm has a payload capacity of 1-5 kg. The system underwent configuration, and a working small-scale prototype was developed.

The block diagram required to create the brain-controlled assistive feeding system using OpenBCI is condensed and shown in Figure 3. To handle Nigerian cuisine, such as boiled rice, in a regular or special food container, the design incorporates a basic robotic system with a dual-arm manipulator. The segmentation principle, one of the 40 inventive principles of TRIZ (the Theory of Inventive Problem Solving), allows a self-feeding activity to be split into two smaller tasks: grabbing and releasing food, and putting food in the user's mouth. Food is moved from a container on a table to the user's mouth using a spoon by the first robotic arm (Arm #1, a spoon-arm). Food is picked up by the second robotic arm (Arm #2, a grab-arm/grip-arm) and placed on the spoon of the spoon-arm (Arm #1). Because the roles of the two arms differ, the end-effectors of both arms can be designed efficiently. A spoon is included in the spoon-arm's end-effector to deliver food to the user's mouth, while the grab-arm's end-effector is made to pick up and release food, such as rice. The unusually shaped gripper seen in the figure makes it possible to pick up food, since the gripper does not need to come close to the user's lips.

The feeding robot uses a microcontroller unit (Arduino) to control a spoon-arm and a grab-arm, as shown in the block diagram (Figure 3), with signals transmitted to the PC's OpenBCI GUI from the Ganglion Board, which extracts the EEG data. The efficiency of the microcontroller unit depends heavily on the subject's familiarity with the system.

Figure 2. Overview of the robotic arm

Figure 3. Block diagram of the system

2.2.2 Configuration and development of the experimental setup

The configuration of the system involves virtual path planning and motion analysis to determine the best route to move from Point A to Point B and back to Point A, where Point B is the user's mouth and Point A is the robot origin. The forward and inverse kinematics of the gripper arm and spoon arm were calculated and working equations formed. The system's motors run on pre-set instructions derived from the extracted and processed EEG signals of the user. For efficiency, the system was controlled manually before it was interfaced with the EEG signals, to avoid errors in motion. The small-scale prototype could then be developed.
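For illustration, a closed-form inverse-kinematics routine for a 2-DOF planar arm of the kind used here is sketched below; the link lengths and target coordinates are hypothetical values, not the actual dimensions of the spoon arm.

```python
import math

def ik_2dof(x, y, l1, l2):
    """Elbow-down inverse kinematics for a 2-link planar arm.

    Returns joint angles (theta1, theta2) in radians that place the
    end-effector at (x, y); raises ValueError if the point is unreachable."""
    c2 = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target outside the arm's workspace")
    theta2 = math.acos(c2)                          # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

# Hypothetical 0.15 m links; Point B (user's mouth) at (0.25 m, 0.10 m) from the origin.
angles = ik_2dof(0.25, 0.10, 0.15, 0.15)
print([f"{math.degrees(a):.1f} deg" for a in angles])
```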

Pre- and post-experiment questionnaires investigating the individuals' demographics and preferences are not used in the majority of BCI investigations, although in a few instances they have been useful. Therefore, upon first contact, the potential participants were briefed about the procedure and asked to report any of the following conditions, which would lead to immediate disqualification: epilepsy, severe light sensitivity, skin allergies, a history of seizures, and a propensity for auras and migraines. This requirement stems from the extended exposure to flashing lights at frequencies that may give susceptible people headaches or epileptic seizures. Before the experiment began, individuals were required to complete a questionnaire and sign a free and informed consent form.

Before electrodes were placed on the subjects, formal and informal consent forms were signed by the six (6) volunteering participants of this project. This ensures strict compliance with existing laws on subject testing and the safety of participants during the project's trial sessions.

2.2.3 Modeling and simulation

The expected requirements for the system were modelled and simulated with the help of the OpenBCI GUI, Visual Studio, MATLAB and SolidWorks/Fusion 360. The uses of this software were stated earlier in Table 3.

2.3 Performance evaluation and validation

For the overall evaluation of this system, the subjects selected were able to control the feeding system. The accuracy of the results was observed to be proportional to the number of training sessions: the more familiar and comfortable the subject becomes with the system, the better the accuracy values obtained.

2.3.1 Performance evaluation for determining user compatibility

The subjects selected for the testing phase of this project were chosen based on the absence of neural disability. This initiative is not open to people with neurological problems such as locked-in syndrome or Amyotrophic Lateral Sclerosis (ALS), or to those with severe disabilities, including sensory loss or difficulty with basic physical mobility.

User compatibility also depends on the user's ability to access the motor imagery neurons in the brain to control the robotic arm for the feeding motion. This involved the use of a 2×2 array of LEDs (2 red and 2 blue) as a teach pendant for training users. The subjects had to be trained to switch on these LEDs with brain signals before being allowed to test the feeding robot system; performance was analysed in successful switching cycles, and the user compatibility results tabulated. Six (6) users were selected for this project: Akure Daniel (24M), Olatunji Setemie (20F), Ugwu Theresa (23F), Udueze Vivian (20F), Derik Frank (19M), and Dare Bukola (21M). The compatibility ratio was tabulated and a line of best fit drawn on the graph to identify future compatible users of the feeding system.

2.3.2 Performance evaluation for determining frequency information

The feature extraction technique and the classifier used depend mainly on the frequency-amplitude graph, which can be evaluated using Fast Fourier Transforms (FFTs). By dissecting a signal into its individual spectral components, frequency information can be extracted from it. FFTs are also used in fault analysis, quality control, and machine or system condition monitoring.
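A minimal sketch of such an FFT evaluation, assuming a 200 Hz sampling rate and a synthetic two-component signal in place of recorded EEG:

```python
import numpy as np

FS = 200                                  # assumed sampling rate, Hz
t = np.arange(4 * FS) / FS                # four seconds of signal
# Synthetic stand-in: a 10 Hz alpha component, a weaker 22 Hz beta component, noise.
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.3 * np.sin(2 * np.pi * 22 * t)
          + 0.2 * np.random.default_rng(1).standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(signal * np.hanning(t.size)))   # windowed FFT magnitude
freqs = np.fft.rfftfreq(t.size, d=1.0 / FS)

peak = freqs[np.argmax(spectrum[1:]) + 1]                     # skip the DC bin
print(f"dominant spectral component: {peak:.2f} Hz")          # ~10 Hz (alpha band)
```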

2.3.3 Performance evaluation for determining accuracy

The accuracy of the results is proportional to the number of training sessions: the more familiar and comfortable the subject becomes with the system, the better the accuracy values obtained. To calculate the accuracy and error rates of the robotic arm's feeding function, Eqs. (1) and (2) were employed,

Accuracy Rate = (Number of Successful Feeding Attempts / Total Number of Feeding Attempts) × 100%           (1)

Error Rate = (Number of Failed Feeding Attempts / Total Number of Feeding Attempts) × 100%           (2)

with Accuracy Rate + Error Rate = 100%.

2.3.4 Performance evaluation for the degree of automation (DOA)

The degree of automation can be defined as the percentage of automated functions within the total functions of an installation or system. It measures the level of independence of the system, i.e., the need for human intervention in the feeding process. The calculation yields a value between 0 and 1, the ratio of the number of automated operations to the total number of operations that must be performed. Consequently, a partially automated system or device, in which not all activities or functions are automated, has a degree of automation lower than 1. This study aims to improve the subjects' independence, meaning a value close to 1 is the goal. The metric chosen for this performance evaluation is simply:

DOA = (Number of Decisions Made * Decision Complexity Weight) / Total Possible Decisions           (3)

where Number of Decisions Made represents the total number of decisions the arm makes during a specific task or operation; Decision Complexity Weight is a factor between 0 and 1 assigned to each decision based on its complexity (e.g., 0.2 for simple, 0.8 for complex); and Total Possible Decisions represents the total number of potential decisions the arm could make in a given scenario. Defining "decision complexity" can be subjective, so the ranking is based on factors such as sensor-data involvement, real-time versus pre-programmed operation, and impact on task execution. Estimating the total possible decisions can likewise be challenging in real-world applications; the system's capabilities, environment, and task variations must be considered.

3. Results and Discussion

3.1 User selection, compatibility and experience

User compatibility for this experiment is assessed through a series of 20 attempts to activate (switch on) an LED on a breadboard with signals from the user's brain, simply by relaxing and concentrating, i.e., 10 attempts for relaxation and 10 for concentration and focus. Each successful activation contributes to a compatibility percentile, calculated by dividing the number of successes by the total number of attempts allotted. To qualify for participation, users must achieve a score of 85% or higher, which translates to successfully turning on the LED in at least 17 of the 20 attempts. This threshold ensures a baseline level of user proficiency for the experiment. The results obtained are presented in Table 4.

Table 4. System compatibility test results

| Subject Name | Subject ID | Age (Sex) | Neural Disability Found | Compatibility Percentile (%) |
|---|---|---|---|---|
| Akure Daniel | ZBS001 | 24 (M) | NIL | 95 |
| Dare Bukola | ZBS002 | 21 (M) | NIL | 100 |
| Derik Frank | ZBS003 | 19 (M) | NIL | 85 |
| Olatunji Setemie | ZBS004 | 20 (F) | NIL | 85 |
| Udueze Vivian | ZBS005 | 20 (F) | NIL | 90 |
| Ugwu Theresa | ZBS006 | 23 (F) | NIL | 85 |
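The percentile computation itself is straightforward; as a check, the sketch below reproduces a few of the Table 4 scores, where the per-subject success counts (19, 20 and 17 of 20) are inferred from the reported percentiles:

```python
def compatibility(successes, attempts=20):
    """Score from the LED-activation protocol (10 relax + 10 focus trials)."""
    score = 100 * successes / attempts
    return score, score >= 85          # 85% threshold, i.e. at least 17 of 20

for subject, hits in [("ZBS001", 19), ("ZBS002", 20), ("ZBS003", 17)]:
    pct, qualified = compatibility(hits)
    print(f"{subject}: {pct:.0f}% -> {'qualified' if qualified else 'not qualified'}")
```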

3.2 Performance evaluation on subject feeding experience

The subjects' feeding experience was evaluated mainly from a Likert-scale questionnaire given to the volunteers to fill out via Google Sheets, to assess their feeding experience and further improve the system. The results are elaborated in Table 5, showing the questions asked, the average score, the standard deviation, and the relative standard error.

Table 5. Questionnaire report on the system feeding experience

| Questionnaire Item | Average Score | Standard Deviation | Relative Standard Error (RSE) |
|---|---|---|---|
| I feel comfortable using my current feeding system | 4.667 | 0.471 | 10.102% |
| I feel independent using my current feeding system | 4.500 | 0.500 | 11.111% |
| I expect this meal-assistance system to increase the independence of the user | 4.167 | 0.687 | 16.492% |
| I expect this meal-assistance system to be satisfactory | 4.667 | 0.745 | 15.972% |
| I expect this meal-assistance system to be comfortable | 4.000 | 0.816 | 20.412% |
| I am comfortable with using technology | 4.500 | 0.500 | 11.111% |
| I felt comfortable using the meal-assistance system | 4.167 | 0.687 | 16.492% |
| I felt independent using the meal-assistance system | 3.833 | 0.687 | 17.927% |
| The meal-assistance system provided significant help in eating | 4.667 | 0.471 | 10.102% |
| The meal-assistance system successfully accomplished tasks | 4.333 | 0.471 | 10.879% |
| The meal-assistance system was simple and easy to use | 4.500 | 0.500 | 11.111% |
| I felt safe while using the meal-assistance system | 4.333 | 0.471 | 10.879% |

3.3 Performance evaluation on rate of successful feeding cycles

This entails the number of successful and failed feeding cycles, where a cycle runs from the system receiving EEG data to its deployment and the completion of the feeding process. The total time taken to complete the cycle is also taken into account alongside other parameters. The result is illustrated in Figure 4.

Figure 4. Bar chart of successful and failed feeding executions

Therefore, it can be concluded that out of 280 feeding cycles, a total of 216 were successful and 64 failed during the experiment days. This gives a rate of 0.77 successes and 0.23 failures per feeding cycle, an acceptable value for practical use of the system. The time taken to complete the feeding cycle was also taken into account: feeding times with the aid of the machine and normal feeding times are compared in Tables 6 and 7 respectively.

Table 6. Time taken for each subject to complete the feeding cycle with ZBS

| Subject ID | Mean Time μ (s) | Standard Deviation ±σ (s) |
|---|---|---|
| ZBS001 | 5.971 | 0.388 |
| ZBS002 | 6.084 | 0.262 |
| ZBS003 | 5.954 | 0.347 |
| ZBS004 | 6.081 | 0.313 |
| ZBS005 | 6.141 | 0.607 |
| ZBS006 | 6.687 | 0.612 |
| CONTROL (AIDA) | 5.000 | 0.000 |

Table 7. Time taken for each subject to complete the feeding cycle without ZBS

| Subject ID | Mean Time μ (s) | Standard Deviation ±σ (s) |
|---|---|---|
| ZBS001 | 4.541 | 0.388 |
| ZBS002 | 4.654 | 0.262 |
| ZBS003 | 4.524 | 0.347 |
| ZBS004 | 4.651 | 0.313 |
| ZBS005 | 4.711 | 0.607 |
| ZBS006 | 5.257 | 0.612 |
| CONTROL (HUMAN) | 5.196 | 0.5419 |

The notable brain-controlled feeding robot by Chen et al. [19] achieves a five (5) second completion time, which is close to the average time it takes individuals. The Zeroeth BCI system, averaging a completion time of about 6-7 seconds, is thus suitable for real-time use and application, although it takes roughly two (2) seconds longer.

3.4 Performance evaluation on accuracy and frequency information of the system

The robotic arms were tested for 280 feeding attempts, and the following results were recorded:

Successful Attempts: 216

Failed Attempts: 64

Using Eqs. (1) and (2), the accuracy rate and error rate are as follows:

Accuracy Rate = (216/280) × 100% = 77.1%

Error Rate = (64/280) × 100% = 22.9%

Therefore, in this study the robotic arm had an accuracy rate of 77.1% and an error rate of 22.9% for the 280 feeding attempts.

In Figures 5 and 6, it can be observed that from experiment day 7 through day 14, the values obtained were more accurate. This supports the point that the accuracy of the system increases as the user becomes more familiar with it.

The accuracy results were also broken down further in the analysis by considering specific scenarios. For instance, taking the 280 attempts separately:

100 attempts involved feeding solid foods such as yam, with 91 successful attempts; the failures owe to the shape of the yam pieces, which inhibits their balance on the spoon arm.

100 attempts involved feeding grain foods such as rice, with 50 successful attempts, because the robot must run twice to pick up rice grains due to their small size.

40 attempts involved feeding Nigerian snacks such as puff, with 39 successful attempts.

40 attempts involved feeding liquid foods such as akamu/custard, with 0 successful attempts.

The accuracy rates for each sub-task are as follows:

Accuracy Rate for Solid Foods = (91/100) × 100% = 91%

Accuracy Rate for Grain Foods = (50/100) × 100% = 50%

Accuracy Rate for Nigerian Snacks = (39/40) × 100% = 97.5%

Accuracy Rate for Liquid Foods = (0/40) × 100% = 0%

This breakdown provides insights into the robotic arm's performance for different types of food, which can be useful for identifying areas for improvement or additional training of the system. For the main experiment, puff was used as the main test food for each trial.
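These figures follow directly from Eqs. (1) and (2); a short sketch reproducing the per-food and overall rates reported above:

```python
# (successes, attempts) per food type, from the breakdown above
attempts = {
    "solid (yam)":    (91, 100),
    "grain (rice)":   (50, 100),
    "snack (puff)":   (39, 40),
    "liquid (akamu)": (0, 40),
}

for food, (ok, n) in attempts.items():
    print(f"{food:15s} accuracy = {100 * ok / n:5.1f}%")      # Eq. (1) per sub-task

# Overall rates from the full 280-attempt experiment (216 successes, 64 failures):
print(f"overall accuracy = {100 * 216 / 280:.1f}%, error = {100 * 64 / 280:.1f}%")
```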

3.5 Performance evaluation on degree of automation (independence) of the system

The degree of automation measures the level of independence of the system, i.e., the need for human intervention in the feeding process. The calculation yielded a value between 0 and 1, the ratio of the number of automated operations (decisions) to the total number of operations that must be performed. Decisions are either human or automatic. The automatic decisions involved:

1. Receiving the EEG signal from the user's scalp;
2. Pre-processing the EEG signal;
3. Sending the result serially to the actuator GUIs;
4. Movement of servomotor 1 (base servomotor of the gripper robot);
5. Movement of servomotor 2 (left servomotor of the gripper robot) after servomotor 1 has moved;
6. Movement of servomotor 3 (right servomotor of the gripper robot) after servomotor 2 has moved;
7. Movement of servomotor 4 (gripper servomotor of the gripper robot) after servomotor 3 has moved;
8. Movement of servomotor 5 (spoon servomotor of the gripper robot) after servomotor 4 has moved;
9. Servomotor reversal;
10. Sending a "0" back to the signal-processing GUI to stop the feeding motion.

The complexity weight is assigned as shown in Table 8.

Figure 5. Graph of successful feeding executions

Figure 6. Graph of failed feeding executions

Table 8. Complexity weight for automated decisions

| | Decision Number | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Automatic Decisions | Complexity | 5 | 5 | 5 | 4 | 4 | 4 | 4 | 5 | 5 | 5 | |
| | Complexity Weight | 1 | 1 | 1 | 0.8 | 0.8 | 0.8 | 0.8 | 1 | 1 | 1 | 42.8 |

Table 9. Complexity weight for human decisions

| | Decision Number | 11 | 12 | 13 | 14 | 15 | 16 | 17 | Total |
|---|---|---|---|---|---|---|---|---|---|
| Human Decisions | Complexity | 4 | 5 | 4 | 1 | 2 | 4 | 4 | |
| | Complexity Weight | 0.8 | 1 | 0.8 | 0.2 | 0.4 | 0.8 | 0.8 | 18.8 |

The human decisions involved:

11. Focusing (concentrating to trigger the Fp2 electrode on);
12. Placement of food in the path of the gripper arm;
13. Triggering movement of the servomotors towards the mouth;
14. Opening of the mouth;
15. Chewing of the food;
16. Triggering movement of the motors back to the origin position;
17. Removing focus (not concentrating).

The complexity weights for human decisions are assigned as shown in Table 9.

The degree of automation obtained is 0.695, which is close to one (1). DOA values typically range between 0 (no automation) and 1 (fully autonomous). On this scale, a value of 0.695 indicates that the robotic arm exhibits a significant level of automation but is not entirely autonomous: it makes complex decisions based on its sensors and programming, but may require occasional human monitoring or operate within predefined boundaries.

With a DOA of 0.695, the user does not yet have complete independence in feeding with the robotic arm. The arm performs tasks with a significant degree of automation, but the user/investigator might still be involved in some aspects, such as: monitoring the arm's operation, especially during critical phases, to ensure it functions as expected; intervening, depending on the specific application, in case of unexpected situations, errors, or changes in the environment that the system cannot handle independently; and managing input/output where the investigator must provide input (e.g., initial setup parameters) or receive output (e.g., a task-completion notification) from the system.

Hence, applying Eq. (3) using the results from Tables 8 and 9: DOA = 42.8/(42.8 + 18.8) = 0.695.
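As a numerical check of Eq. (3): the weights in Tables 8 and 9 follow weight = complexity/5, each decision contributes complexity × weight, and the DOA is the automated share of the combined total. A minimal sketch:

```python
auto_complexity  = [5, 5, 5, 4, 4, 4, 4, 5, 5, 5]   # automatic decisions 1-10 (Table 8)
human_complexity = [4, 5, 4, 1, 2, 4, 4]            # human decisions 11-17 (Table 9)

def weighted_total(complexities):
    # Each decision contributes complexity * weight, with weight = complexity / 5.
    return sum(c * (c / 5) for c in complexities)

auto, human = weighted_total(auto_complexity), weighted_total(human_complexity)
print(f"{auto:.1f} {human:.1f}")                     # 42.8 18.8
print(f"DOA = {auto / (auto + human):.3f}")          # 0.695
```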

3.6 General response of the system

The system's overall performance at the first iteration was satisfactory, although a number of adjustments and modifications to the brain-controlled feeding system remain to be made. Figure 7 shows the feeding system undergoing a training session, while Figure 8 shows the developed Zeroeth BCI feeding system.

Figure 7. Real-time test session with the feeding system

Figure 8. The Zeroeth BCI feeding system

4. Conclusion

Self-feeding may not be achievable without assistance for those who suffer from cerebral palsy, amputations, spinal cord injuries or any neural deficiency that results in the total or partial loss of upper limb functions. Meal-assistance robots have been deployed to restore independence to these persons. A real-time feeding robot controlled by the brain was proposed in this study, and a system prototype was created that could be controlled in real time by brain signals. Six healthy young subjects voluntarily participated in the experiment to validate the proposed real-time brain-controlled feeding robot system.

The system's accuracy could reach 77±5%. For individuals who prefer to dine slowly, making one decision every five seconds is a reasonable pace. The idea of assistive feeding robots in general is surrounded by a number of objections because of the possible users' conditions. The feeding interval is a regular issue for those who are incapable. Sometimes there is not much time between meals, yet the user still needs time to chew and swallow the food. Occasionally, they feel too exhausted or unable to continue eating, so they would rather take a quick nap after a few spoonful. For the following reasons, assistive feeding robots are beneficial. First of all, while they are not focused on their caregiver setting the next spoon ready for service, users can chew their food thoroughly. Additionally, individuals are free to eat whenever they wish. With the help of the feeding robots, users can eat on their own whenever and however often they choose throughout the day.

Acknowledgment

This work is supported by the Department of Mechanical & Mechatronics Engineering, Afe Babalola University Ado-Ekiti, Nigeria.

Nomenclature

ZBS    Zeroeth BCI System
AT    Assistive Technology
EEG    Electroencephalography
EMG    Electromyogram
ECG/EKG    Electrocardiogram
EOG    Electrooculogram
BCI    Brain Computer Interface
DOA    Degree of Automation

References

[1] Nijssen, J., Comley, L.H., Hedlund, E. (2017). Motor neuron vulnerability and resistance in amyotrophic lateral sclerosis. Acta Neuropathologica, 133: 863-885. https://doi.org/10.1007/s00401-017-1708-8

[2] Majmudar, S., Wu, J., Paganoni, S. (2014). Rehabilitation in amyotrophic lateral sclerosis: Why it matters. Muscle & Nerve, 50(1): 4-13. https://doi.org/10.1002/mus.24202

[3] Browarska, N., Kawala-Sterniuk, A., Chechelski, P., Zygarlicki, J. (2020). Analysis of brain waves changes in stressful situations based on horror game with the implementation of virtual reality and brain-computer interface system: A case study. Bio-Algorithms and Med-Systems, 16(4): 20200050. https://doi.org/10.1515/bams-2020-0050

[4] Cardona-Álvarez, Y.N., Álvarez-Meza, A.M., Cárdenas-Peña, D.A., Castaño-Duque, G.A., Castellanos-Dominguez, G. (2023). A novel OpenBCI framework for EEG-based neurophysiological experiments. Sensors, 23(7): 3763. https://doi.org/10.3390/s23073763

[5] Augusto, D.G., Murdolo, L.D., Chatzileontiadou, D.S., Sabatino Jr, J.J., Yusufali, T., Peyser, N.D., Hollenbach, J.A. (2023). A common allele of HLA is associated with asymptomatic SARS-CoV-2 infection. Nature, 620(7972): 128-136. https://doi.org/10.1038/s41586-023-06331-x

[6] Pate, J. (2014). Brainwriter helps graffiti artist suffering from ALS to draw using OPENBCI. Open Health News. https://www.openhealthnews.com/news-clipping/2014-09-16/brainwriter-helps-graffiti-artist-suffering-als-draw-using-openbci.

[7] Nordness, P.D., Epstein, M.H., Synhorst, L. (2009). Convergent validity and test-retest reliability of the preschool behavioral and emotional behavior rating scale: Parents as respondents. Special Education and Communication Disorders Faculty Publications, 3(1): 51-60.

[8] Nijboer, F., Sellers, E.W., Mellinger, J., Jordan, M.A., Matuz, T., Furdea, A., Kübler, A. (2008). A P300-based brain-computer interface for people with amyotrophic lateral sclerosis. Clinical Neurophysiology, 119(8): 1909-1916. 

[9] Cincotti, F., Quitadamo, L.R., Aloise, F., Bianchi, L., Babiloni, F., Mattia, D. (2009). Interacting with the environment through non-invasive brain-computer interfaces. In Universal Access in Human-Computer Interaction. Intelligent and Ubiquitous Interaction Environments, Springer, Berlin, Heidelberg, pp. 483-492. https://doi.org/10.1007/978-3-642-02710-9_53

[10] Millan, J.R., Renkens, F., Mourino, J., Gerstner, W. (2004). Noninvasive brain-actuated control of a mobile robot by human EEG. IEEE Transactions on Biomedical Engineering, 51(6): 1026-1033.

[11] Chen, X., Wang, Y., Zhang, S., Gao, S., Hu, Y., Gao, X. (2017). A novel stimulation method for multi-class SSVEP-BCI using intermodulation frequencies. Journal of Neural Engineering, 14(2): 026013. https://doi.org/10.1088/1741-2552/aa5989

[12] Frank, J.A., Antonini, M.J., Anikeeva, P. (2019). Next-generation interfaces for studying neural function. Nature Biotechnology, 37(9): 1013-1023. https://doi.org/10.1038/s41587-019-0198-8

[13] McFarland, D.J. (2020). Brain-computer interfaces for amyotrophic lateral sclerosis. Muscle & Nerve, 61(6): 702-707. https://doi.org/10.1002/mus.26828

[14] Kawala-Sterniuk, A., Browarska, N., Al-Bakri, A., Pelc, M., Zygarlicki, J., Sidikova, M., Gorzelanczyk, E.J. (2021). Summary of over fifty years with brain-computer interfaces—A review. Brain Sciences, 11(1): 43. https://doi.org/10.3390/brainsci11010043

[15] Korovesis, N., Kandris, D., Koulouras, G., Alexandridis, A. (2019). Robot motion control via an EEG-based brain–computer interface by using neural networks and alpha brainwaves. Electronics, 8(12): 1387. https://doi.org/10.3390/electronics8121387

[16] Amusat, N. (2009). Disability care in Nigeria: The need for professional advocacy. African Journal of Physiotherapy and Rehabilitation Sciences, 1(1): 30-36. https://doi.org/10.4314/ajprs.v1i1.51313

[17] Akuthota, S., Janapati, R.C., Kumar, K.R., Gerogiannis, V.C., Kanavos, A., Acharya, B., Desai, U. (2024). Enhancing real-time cursor control with motor imagery and deep neural networks for brain-computer interfaces. Information, 15(11): 702. https://doi.org/10.3390/info15110702

[18] Stephan, C.E., Rogers, J.W. (1985). Advantages of using regression analysis to calculate results of chronic toxicity tests. In Aquatic Toxicology and Hazard Assessment: Eighth Symposium, ASTM STP, pp. 328-338. 

[19] Chen, S.C., Wu, C.M., Zaeni, I.A., Chen, Y.J. (2018). Applying fuzzy decision for a single channel SSVEP-based BCI on automatic feeding robot. Microsystem Technologies, 24: 199-207. https://doi.org/10.1007/s00542-016-3229-0