Classification of Glaucoma Optical Coherence Tomography (OCT) Images Based on Blood Vessel Identification Using CNN and Firefly Optimization


Komanduri Venkata Sesha Sai Rama Krishna, Kosaraju Chaitanya, Pedalanka Poorna Saraswathi Subhashini, Rajesh Yamparala, Satya Sandeep Kanumalli

Department of CSE, Vignan’s Nirula Institute of Technology & Science for Women, Peda Palakaluru, Guntur 522009, Andhra Pradesh, India

Department of Electronics and Communication Engineering, RVR & JC College of Engineering, Guntur 522019, Andhra Pradesh, India

Corresponding Author Email: ramakrishnakvss149@gmail.com

Pages: 239-245 | DOI: https://doi.org/10.18280/ts.380126

Received: 17 June 2020 | Revised: 23 December 2020 | Accepted: 2 January 2021 | Available online: 28 February 2021

© 2021 IIETA. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).

OPEN ACCESS

Abstract: 

Glaucoma is a chronic ocular neurodegenerative condition characterised by optic neuropathy and visual disturbance, corresponding to optic disc cupping and degeneration of optic nerve fibres. Globally, 76 million people suffer from glaucoma, a collective name for a group of eye conditions that cause vision loss and eventually result in blindness through progressive structural and functional damage to the optic nerve; it is one of the leading causes of blindness. Early detection can slow the progression of the disease and preserve the sight of many patients throughout the world. Detecting and identifying glaucoma in an image is therefore important for controlling the loss of vision. Although numerous models exist for the classification of glaucoma disease, their detection and prediction rates remain low, while accurate identification is of foremost importance. To train the CNN, data augmentation and dropout were applied. A classifier was trained to identify the disc fundus images of healthy and glaucomatous eyes using a feature-vector representation of each input image to integrate the results from each CNN model, removing the second fully connected layer. In this manuscript, a convolutional neural network (CNN) with an optimization mechanism is proposed for the classification of glaucoma OCT images, using a CNN-based firefly optimization model. The proposed firefly-based CNN model performs better than state-of-the-art mechanisms.

Keywords: 

convolutional neural network (CNN), firefly optimization, glaucoma, blood vessel

1. Introduction

The retina is a multi-layered structure that covers a large surface inside the eye. Its function is to convert light into a neural response for further use by the brain. The retina contains two different types of photoreceptors: rods and cones. A healthy eye contains around 60 million rods and around 3 million cones [1]. Rods are located at the peripheral part of the retina and are responsible for peripheral vision, motion detection and the perception of light/dark contrast. Cones are mostly located in the macula lutea region of the retina, with most of them concentrated at its central part, the fovea centralis [2]. Cones provide colour and central vision. The retina, however, consists of many layers, and photoreceptors constitute only a small part of these. Several retinal imaging techniques can be used, for example Colour Fundus Photography (CF), Fundus Autofluorescence (FAF), Near-Infrared Reflectance (NIR) and Optical Coherence Tomography (OCT). CF, FAF and NIR are en-face imaging techniques, while OCT is a cross-sectional imaging technique. OCT allows the thickness of the retina to be measured accurately, something that is not possible with the other imaging techniques mentioned. OCT creates cross-sectional views of the retina non-invasively, and OCT scans are often acquired as multiple linear slices [3]. By stacking these slices, it is possible to create a 3D view of the retina and its different layers; 18 different retinal layers can be identified in OCT volumes. An OCT scan is acquired by directing a beam of near-infrared light at both a mirror and the retina [4].

Diabetic retinopathy is a condition that damages the retina and affects many people who suffer from diabetes. Properly monitoring and treating the eyes can reduce symptoms in 90 percent of new cases [5]. Diabetic retinopathy may cause macular edema, in which vessels leak blood into the retina. OCT can show which areas are thickened when fluid accumulates. Other effects of diabetic retinopathy visible in OCT images are retinal swelling and damaged nerve tissue [6].

A convolutional neural network (CNN) is a supervised classifier based on deep learning [7]. It consists of several convolutional layers and subsampling layers that learn efficient filters for extracting the image features relevant to the classification task. In this research we adopted a 19-layer CNN architecture commonly used to address image classification problems. To distinguish healthy from glaucomatous eyes, the output layer was replaced by a new softmax layer with the two units sufficient for this task [8]. Transfer learning, in turn, is a machine learning method that applies a model established on previous tasks to a new task domain.
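As an illustration of this transfer-learning setup, the sketch below replaces the head of a 19-layer CNN (VGG19) with a two-unit softmax for healthy versus glaucomatous eyes. The manuscript does not name a framework, so tf.keras, the input resolution and the added dense layer are assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: adapting a 19-layer CNN (VGG19) for two-class
# glaucoma screening by replacing the final classifier with a 2-unit softmax.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.VGG19(
    weights="imagenet",      # transfer learning: reuse pretrained filters
    include_top=False,       # drop the original fully connected head
    input_shape=(224, 224, 3))
base.trainable = False       # freeze the convolutional feature extractor

model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),                    # dropout, as used during training
    layers.Dense(2, activation="softmax"),  # healthy vs. glaucomatous
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

Data augmentation (flips, small rotations) would be applied to the training images before fitting, as stated in the abstract.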

The risk of internal bleeding arises because the blood vessels are inherently thin. Measuring the curvature of the vessels to detect DR is therefore common practice among ophthalmologists. Supervised methods are of great significance for medical image classification. These methods use two datasets: a training set and a test set [9]. The training set consists of images labelled for a specific category, e.g., vessel or non-vessel. The test set is annotated manually by expert ophthalmologists. The classification task aims to divide image pixels into blood vessel and non-vessel classes [10]. Blood vessel segmentation is achieved with different supervised classification techniques, such as neural networks and support vector machine methods, taking into account the feature structure of the image vessels. The general architecture of medical image data classification is indicated in Figure 1.

Figure 1. General architecture of medical image data classification

The pixel intensity of the larger blood vessels is similar to the intensity at the centre of the retinal image, so if a threshold value is chosen to separate the larger vessels, the centre will also be recognised as a vessel [11]. Thin vessels, on the other hand, have intensities close to those of the non-vessel pixels in the background of the retina. Although vessels are distinct from their neighbourhood, no acceptable global threshold value can be found [12]. The first two features were chosen based on these characteristics: the green intensity level of the central pixel and the average green intensity level of its surrounding block of size 5×5. Together, these two attributes provide valuable information: if the central pixel belongs to a vessel, it will be darker than the average intensity of the block [13].
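A minimal sketch of these two features, assuming the input is an H×W×3 RGB fundus array and the queried pixel lies at least two pixels from the border; the function name and variables are illustrative.

```python
# Sketch of the two features described above: the green intensity of the
# central pixel and the mean green intensity of its 5x5 neighbourhood.
import numpy as np

def vessel_features(rgb, row, col):
    green = rgb[:, :, 1].astype(float)       # green channel carries the
                                             # best vessel contrast
    center = green[row, col]                 # feature 1: central pixel
    block = green[row - 2:row + 3, col - 2:col + 3]
    block_mean = block.mean()                # feature 2: 5x5 block mean
    # A vessel pixel is expected to be darker than its neighbourhood mean.
    return center, block_mean, center < block_mean
```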

In this paper we address retinal image classification with blood vessel detection to identify glaucoma disease. The OCT retinal images are classified using a convolutional neural network (CNN); however, since CNN alone does not give satisfactory results, we combine it with a firefly optimization mechanism.

2. Related Work

Serener and Serte [2] apply a CNN to the EyePACS-1 dataset containing 9963 images from 4997 patients (a subset of the EyePACS dataset). The CNN uses a pretrained model to consolidate neighbouring pixels into local features and then summarizes them into global features. The neural network is based on the Inception-v3 design. The framework trains a single network to classify fundus images as (1) moderate or worse DR, (2) severe or worse DR, (3) referable diabetic macular edema, and (4) fully gradable. Referable DR is defined as an image assigned to class 1, 3 or both.

Ahn et al. [3] present a DR grading framework that relies solely on microaneurysm detection to arrive at a DR diagnosis. The work presents an ensemble-based microaneurysm detector, where an ensemble is defined as a set of pre-processing and candidate-extractor methods. Shetty and Gutte [4] present the listing of pre-processing methods and candidate extractors, along with the evaluation measures for choosing an ensemble from a pool of ensembles.

After microaneurysm identification, a yes/no decision on the presence of DR is made, where the presence of any microaneurysm is taken as evidence of DR. Daneshvar et al. [5] present an automatic framework to diagnose DR by analysing colour fundus images, using AM-FM for feature extraction and two statistical classifiers, support vector machines (SVM) and partial least squares (PLS), for classification.

This framework assigns the images to two classes, DR and no-DR, without considering severity. An et al. [6] have introduced a three-stage computer-aided screening framework to detect and grade fundus images for DR severity. In this work, a set of the best 30 features out of 78 is shortlisted by means of AdaBoost [12].

Khalil et al. [8] present a framework using amplitude modulation-frequency modulation (AM-FM) to distinguish normal and DR-affected retinal images. The work performs a multiscale decomposition of the regions in the fundus image to classify them into DR-related lesions and normal structures, followed by dimensionality reduction and clustering. Finally, the partial least squares method is used to arrive at a DR diagnosis along with an associated severity grade.

Another approach, presented by Bashar [10], evaluates a computer-aided diagnosis (CAD) framework for DR that combines the outputs of several modules performing vessel segmentation, optic disc detection, red lesion detection and bright lesion detection to arrive at a probability score deciding whether the patient (or the patient's data) should be sent to an ophthalmologist. The outputs of the modules are combined and used to compute a group of features, which are input to a k-nearest neighbour (kNN) classifier to arrive at a final decision about the patient's examination.

Deep image feature extraction relies heavily on convolutional neural networks (CNN). Chen et al. [12] described a DR detection device called IDx-DR X2.1, which is built upon the commercial device IDx-DR. Colour fundus images are fed to an AlexNet-based CNN to classify them into healthy and DR images, and the DR images are further classified into different types of DR: referable DR, vision-threatening DR and proliferative DR. The framework uses multiple CNNs to detect haemorrhages, exudates and other lesions [13].

The Firefly Algorithm (FA) is a population-based metaheuristic and an active research topic. Depending on how the individual agents interact, such algorithms can be classified as attraction-based or non-attraction-based; FA is, by design, a clear example of a repeatedly applied attraction-based algorithm. FA has become an essential component of swarm intelligence (SI) and is implemented in many areas of engineering and optimization. The FA was introduced by Yang [21], motivated by imitating the flashing behaviour of fireflies during mate and prey selection. Each firefly's attraction towards the others is indicated by its intensity, which increases or decreases with the distance between the flies. Fireflies with high light intensity attract those with lower intensity, which reduces the distance between them and updates their intensities. The best solution to an objective function corresponds to the firefly with the highest brightness (intensity) and minimal distance. The firefly algorithm is used in many applications, such as minimizing computation time in digital image compression and feature selection.

3. Proposed Work

The proposed mechanism identifies glaucoma in human retinas. For detection, we use blood vessel detection based on CNN and firefly optimization. CNN is a powerful mechanism for image classification, but it requires considerable time to analyse images, so we use a firefly optimization technique to improve its operation. The following sections describe the firefly optimization method and CNN, and finally present the firefly-based CNN mechanism for blood vessel detection and the identification of retinal disease. The proposed architecture is depicted in Figure 2.

Figure 2. Proposed architecture

There are around two thousand firefly species, generally found in tropical and temperate regions. Most firefly species produce flashes that are unusual, brief and rhythmic [15]. The mechanism responsible for the flash of light is bioluminescence [16]. These flashes are used to attract mating partners and potential prey, and the rhythmic flashes differ from each other in flash rate and duration [17]. Females respond to a male's unique blinking pattern, which forms a signalling system that brings both sexes together [18]. Light emitted by a source obeys the inverse square law: as the distance $d$ increases, the light intensity $I$ decreases as $I \propto 1/d^2$.

Firefly mechanism

We assume that every firefly in the FA has three prominent characteristics:

  1. Unisex in nature: all fireflies are attracted to one another regardless of sex.
  2. Attractiveness: attractiveness is proportional to brightness; each firefly moves towards brighter ones, and if no brighter firefly exists, it moves randomly in search of the brightest.
  3. Objective function: the landscape of the objective function determines the brightness of a firefly.

The algorithm is evaluated with a fitness function mapped to the objective function, which is attained through maximization of brightness. Based on these three rules, the pseudo code for the Firefly algorithm is:

Firefly Algorithm

Objective function $f(x)$, $x = (x_1, ..., x_d)^T$.
Generate an initial population of n fireflies $x_i (i = 1, 2, ..., n)$.
Light intensity $I_i$ at $x_i$ is determined by $f(x_i)$.
Define the light absorption coefficient $\gamma$.
while (t < MaxGeneration)
    for i = 1 : n (all n fireflies)
        for j = 1 : n (all n fireflies) (inner loop)
            if ($I_i < I_j$)
                Move firefly i towards j.
            end if
            Vary attractiveness with distance r via $\exp(-\gamma r^2)$.
            Evaluate new solutions and update light intensity.
        end for j
    end for i
    Rank the fireflies and find the current global best g.
end while
Post-process results and visualization.
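A minimal NumPy rendering of this pseudocode, written as a maximization of a generic objective f; the parameter values (beta0, gamma, alpha) and search bounds below are illustrative defaults, not values prescribed in this manuscript.

```python
import numpy as np

def firefly(f, dim, n=25, max_gen=100, beta0=1.0, gamma=1.0, alpha=0.2,
            lo=-5.0, hi=5.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))            # initial population of n fireflies
    intensity = np.array([f(xi) for xi in x])    # light intensity I_i = f(x_i)
    for _ in range(max_gen):
        for i in range(n):
            for j in range(n):
                if intensity[i] < intensity[j]:  # i moves towards brighter j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with r^2
                    x[i] += (beta * (x[j] - x[i])
                             + alpha * (rng.random(dim) - 0.5))
                    np.clip(x[i], lo, hi, out=x[i])
                    intensity[i] = f(x[i])       # update light intensity
    best = np.argmax(intensity)                  # rank fireflies, take the global best
    return x[best], intensity[best]

# Example: maximize f(x) = -(x1^2 + x2^2); the optimum lies at the origin.
x_best, f_best = firefly(lambda v: -np.sum(v ** 2), dim=2)
```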

3.1 Convolutional neural network (CNN)

Inspired by the human brain with its billions of connections, deep neural networks have multiple layers of interconnected units called neurons [19]. Depending on the signals it receives as inputs, a neuron can be activated, producing another signal sent to another neuron [20]. The set of input signals is propagated through the middle layers, called hidden layers, and then to the output layer [21]. Neurons are the building blocks of neural networks.

Figure 3. Neuron structure

Figure 3 shows one of the most basic neuron structures. A set of input values is weighted and summed, and the result is used as input to an activation function that determines how strongly the neuron is activated by the received signal. For this reason the analogy with the human brain is strong: a biological neuron is activated by some input signal and propagates its output to others. In the figure:

$x_i$: one of the input values.

$w_i$: one of the weights.

$\theta$: the activation threshold.

The activation function is simply a "rule" that determines how strongly the neuron is activated. There are different types of activation functions; the most common are described below.

The first is the sigmoid activation function, which produces an output in the range [0, 1]. It has been reported that networks using this activation function may incur the vanishing or exploding gradient problem. Its formula is:

$\sigma(x)=\frac{1}{1+e^{-x}}$

The hyperbolic tangent activation function is similar to the sigmoid function, but its output range is [-1, 1]. Both the tanh and sigmoid nonlinearities are said to saturate because they have a limited range of possible output values. For that reason, using tanh or sigmoid activation functions in the hidden layers of a feed-forward neural network leads to vanishing gradient problems [7].

$\tanh (x)=\frac{2}{1+e^{-2 x}}-1$
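The neuron of Figure 3 can be sketched directly from these definitions; the input, weight and threshold values below are illustrative.

```python
# Sketch of the neuron of Figure 3: weighted sum, threshold, then one of the
# activation functions defined above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))              # output in [0, 1]

def tanh(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0  # output in [-1, 1]

def neuron(x, w, theta, activation=sigmoid):
    return activation(np.dot(w, x) - theta)      # weighted sum minus threshold

x = np.array([0.5, -1.2, 3.0])                   # inputs x_i
w = np.array([0.8, 0.1, -0.4])                   # weights w_i
print(neuron(x, w, theta=0.2))                   # activated output
```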

Convolutional neural networks are well suited to tasks such as object recognition, image classification, and text analysis. They were first introduced in 1989, but only in recent years, thanks to the increase in GPU power and the availability of huge training datasets, have they been widely used in computer vision tasks. The key observation is that many natural signals are compositions of low-level features (edges form motifs, motifs assemble into parts, parts form objects [16]). These patterns can be found not only in images but also in speech and music.

The CNN structure is inspired by the visual cortex in animals, where groups of cells are sensitive to a small subregion of the input image. The image is therefore not processed as a single block but as a composition of smaller features. The main difference with feed-forward neural networks is the absence of fully connected layers in the initial and central parts of the architecture. Fully connected layers are only used in the final part to produce the output: the classification probability distribution. From a computational perspective, this translates into models that require a smaller number of weights even for a large number of layers, and are therefore more tractable in terms of memory occupation. The training process is executed with backpropagation, as in feed-forward neural networks. In basic CNN architectures, feature extraction is performed by a repeated pattern: first a convolutional layer is applied to the input, then an activation function (generally the ReLU), and finally a pooling layer, which reduces the information size. The role of convolutional layers is to find local conjunctions of features from the previous layers. They do not perform a simple matrix multiplication as the fully connected layers of a feed-forward neural network do; they execute a convolution instead.

Weights in a convolutional neural network are grouped in matrices called kernels (or filters). Though they have a smaller width and height than the input, in basic CNN architectures their depth must match the input depth. Considering the input layer, an image usually has three dimensions: width, height, and depth (generally three, following RGB encoding); therefore the first set of filters must have a depth of three. The CNN layers are indicated in Figure 4.

Figure 4. Convolution
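A minimal sketch of the convolution operation of Figure 4 for a single-channel input with stride 1 and no padding; the edge-detection kernel is an illustrative example, not one taken from the paper. For an RGB input, the kernel would have depth 3 to match the input depth, as described above.

```python
# Slide a small kernel over the image and sum the elementwise products.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]])   # simple edge detector
```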

Pooling layers merge semantically similar features found in the previous activation map and help control overfitting. The most common is the max-pooling layer, which extracts the maximum value. The max pooling process is indicated in Figure 5.

Figure 5. Max pooling example

The different colours denote different areas of the input tensor to the pooling layer; max pooling takes the maximum value in each of those sections, typically with a stride of two. Activation layers apply an activation function (typically a ReLU or LeakyReLU) to each element of their input. The fully connected layer takes the output of convolution/pooling, flattens it and predicts the best label to describe the image. As in a normal feed-forward neural network, the inputs to the fully connected layer are multiplied by the weights and summed together, then an activation function produces the output. The results are propagated to the next fully connected layer; the last one has a neuron for each class label and produces the probability distribution.
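A minimal sketch of the 2×2 max pooling of Figure 5 with a stride of two; the input values are illustrative.

```python
# Reduce each non-overlapping 2x2 region to its maximum value.
import numpy as np

def max_pool(x, size=2, stride=2):
    oh = (x.shape[0] - size) // stride + 1
    ow = (x.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            r, c = i * stride, j * stride
            out[i, j] = x[r:r + size, c:c + size].max()
    return out

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 1],
              [3, 4, 6, 8]])
print(max_pool(x))   # [[6. 4.] [7. 9.]]
```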

3.2 Proposed hybrid firefly based CNN algorithm

1: Input: glaucoma dataset

2: Output: classification of the glaucoma dataset

3: Initialize the firefly population using Eq. (3)

4: Evaluate the fitness function of each firefly using Eq. (4)

5: for each firefly xi, calculate the fitness function

6: select xbest and xworst

7: while (!termination condition) do

8: select sbest as an alternative firefly with a competing fitness, originating in a different region

9: compute an offspring solution x′best using Algorithm 5

10: for all ((p → N) and (xp ≠ xworst)) do

11: for all ((q = p − r → q = p + r) and (xq ≠ xworst)) do

12: if (f(xq) > f(xp)) then

13: modify firefly q using Algorithm 5 to obtain a better offspring firefly x′q

14: update firefly q with the offspring x′q

15: perform the step movement of firefly q towards the ideal area using Eq. (5)

16: end if

17: end for

18: end for

19: update the worst solution xworst

20: if f(x′best) > f(xbest) then

21: xbest ← x′best

22: end if

23: rank the population and update the best xbest and worst xworst solutions

24: end while

25: return xbest
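Since Eqs. (3)-(5) and Algorithm 5 are not reproduced in this manuscript, the following is only a hedged sketch of how the hybrid loop could be wired, assuming each firefly encodes CNN hyperparameters (log learning rate and dropout rate) and that the fitness of step 4 is the validation accuracy returned by a user-supplied `build_and_train_cnn` routine; every name and bound here is hypothetical.

```python
import numpy as np

def hybrid_firefly_cnn(build_and_train_cnn, n=10, max_gen=20,
                       beta0=1.0, gamma=1.0, alpha=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo = np.array([-5.0, 0.1])       # hypothetical bounds: log10(lr), dropout
    hi = np.array([-1.0, 0.7])
    x = rng.uniform(lo, hi, (n, 2))  # step 3: initialize firefly population
    # step 4: fitness of each firefly = validation accuracy of the trained CNN
    fit = np.array([build_and_train_cnn(lr=10 ** p[0], dropout=p[1]) for p in x])
    for _ in range(max_gen):         # step 7: main loop
        for p in range(n):
            for q in range(n):
                if fit[q] > fit[p]:  # step 12: q is fitter, so p moves towards it
                    r2 = np.sum((x[p] - x[q]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[p] += (beta * (x[q] - x[p])
                             + alpha * (rng.random(2) - 0.5))
                    x[p] = np.clip(x[p], lo, hi)
                    fit[p] = build_and_train_cnn(lr=10 ** x[p][0], dropout=x[p][1])
    best = np.argmax(fit)            # steps 19-23: keep the best solution
    return {"lr": 10 ** x[best][0], "dropout": x[best][1], "val_acc": fit[best]}
```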

4. Experimental Setup

The proposed method has been tested on two datasets: the Structured Analysis of the Retina, STARE (81 images), and DIARETDB0 (125 images), a dataset collected to detect signs of retinopathy in adult patients. The images in the two datasets include diabetic retinopathy symptoms such as exudates, haemorrhages, and other abnormal OD appearances. A sample of the STARE image dataset is shown in Figure 6.

Figure 6. Snap of STARE data set images

Figure 7 shows STARE images with problematic blood vessels. The STARE and DIARETDB0 datasets are organized into fair and poor groups: images with a bright and obvious OD boundary are considered 'fair' and the rest 'poor'. There are 81 images in total in STARE (31 fair quality and 50 poor quality) and 125 images in total in DIARETDB0 (90 fair quality and 35 poor quality).

The proposed methods were implemented in ANACONDA/Spyder with Python 3, using the matplotlib and numpy packages. The localization and segmentation results shown in the following sub-sections were obtained by running the proposed mechanism on a PC (Intel(R) Core(TM) i7-4500U CPU @ 1.80 GHz, up to 2.40 GHz).

Figure 7. Snap of STARE data set images with problematic blood vessel

Figure 8. Accuracy of classification

Figure 9. Precision of classification

Figure 8 shows the classification accuracy on the two datasets, STARE and DIARETDB0, for CNN, the existing mechanism and the proposed firefly-based CNN model. For both poor quality and good quality images, CNN alone and the existing work fail to achieve good classification accuracy because of their construction, whereas our mechanism obtains the best classification accuracy.

Figure 9 shows the classification precision on the two datasets, STARE and DIARETDB0, for CNN, the existing mechanism and the proposed firefly-based CNN model. For both poor quality and good quality images, CNN alone and the existing work achieve lower precision than the proposed model because of their construction.

Figure 10 shows the classification recall on the two datasets, STARE and DIARETDB0, for CNN, the existing mechanism and the proposed firefly-based CNN model. For both poor quality and good quality images, CNN alone and the existing work achieve lower recall than the proposed model because of their construction.

Figure 11 shows the classification F1-score on the two datasets, STARE and DIARETDB0, for CNN, the existing mechanism and the proposed firefly-based CNN model. For both poor quality and good quality images, CNN alone and the existing work achieve a lower F1-score than the proposed model because of their construction.
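The four quantities reported in Figures 8-11 can be computed from predicted and ground-truth labels as sketched below; scikit-learn is assumed for brevity and the label vectors are illustrative, not the experimental data.

```python
# Compute the metrics of Figures 8-11 from binary classification outputs.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # illustrative ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # illustrative model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
```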

Figure 10. Recall of classification

Figure 11. F1-score of classification

Figure 12. Computation time of classification

Figure 12 shows the computation time of classification on the two datasets, STARE and DIARETDB0, for CNN, the existing mechanism and the proposed model. For both poor quality and good quality images, the proposed model attains the lowest overall computation time, thanks to the firefly optimization.

5. Conclusion

An efficient technique to detect glaucoma is proposed in this work. Tracking the progression of glaucoma is necessary for the proper treatment of the disease. Various works use fundus images or OCT images for glaucoma diagnosis, and combining the fundus image and the OCT image of the same eye can diagnose the glaucoma stage more accurately. A CNN-based classifier is effective in working with image data, but it suffers from slow adaptation and long processing times. To reduce these, we use the firefly optimization technique. The hybrid Firefly-CNN works effectively with glaucoma datasets, and the experimental results prove the effectiveness of the proposed work.

References

[1] Li, L., Xu, M., Liu, H., Li, Y., Wang, X., Jiang, L., Wang, N. (2019). A large-scale database and a CNN model for attention-based glaucoma detection. IEEE Transactions on Medical Imaging, 39(2): 413-424. https://doi.org/10.1109/TMI.2019.2927226

[2] Serener, A., Serte, S. (2019). Transfer learning for early and advanced glaucoma detection with convolutional neural networks. 2019 Medical Technologies Congress (TIPTEKNO), Izmir, Turkey, pp. 1-4. https://doi.org/10.1109/TIPTEKNO.2019.8894965

[3] Ahn, J.M., Kim, S., Ahn, K.S., Cho, S.H., Lee, K.B., Kim, U.S. (2018). A deep learning model for the detection of both advanced and early glaucoma using fundus photography. PloS One, 13(11): e0207982. https://doi.org/10.1371/journal.pone.0207982

[4] Shetty, S.C., Gutte, P. (2018). A novel approach for glaucoma detection using fractal analysis. 2018 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), Chennai, pp. 1-4. https://doi.org/10.1109/WiSPNET.2018.8538760

[5] Daneshvar, R., Yarmohammadi, A., Alizadeh, R., Henry, S., Law, S.K., Caprioli, J., Nouri-Mahdavi, K. (2019). Prediction of glaucoma progression with structural parameters: Comparison of optical coherence tomography and clinical disc parameters. American Journal of Ophthalmology, 208: 19-29. https://doi.org/10.1016/j.ajo.2019.06.020

[6] An, G., Omodaka, K., Hashimoto, K., Tsuda, S., Shiga, Y., Takada, N., Nakazawa, T. (2019). Glaucoma diagnosis with machine learning based on optical coherence tomography and color fundus images. Journal of Healthcare Engineering. https://doi.org/10.1155/2019/4061313

[7] Carrillo, J., Bautista, L., Villamizar, J., Rueda, J., Sanchez, M. (2019). Glaucoma detection using fundus images of the eye. 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, pp. 1-4. https://doi.org/10.1109/STSIVA.2019.8730250

[8] Khalil, T., Akram, M.U., Khalid, S., Jameel, A. (2017). An overview of automated glaucoma detection. 2017 Computing Conference, London, pp. 620-632. https://doi.org/10.1109/SAI.2017.8252161

[9] Sengar, N., Dutta, M.K., Burget, R., Ranjoha, M. (2017). Automated detection of suspected glaucoma in digital fundus images. 2017 40th International Conference on Telecommunications and Signal Processing (TSP), Barcelona, Spain, pp. 749-752. https://doi.org/10.1109/TSP.2017.8076088

[10] Bashar, A. (2019). Survey on evolving deep learning neural network architectures. Journal of Artificial Intelligence, 1(2): 73-82. https://doi.org/10.36548/jaicn.2019.2.003

[11] Vijayakumar, T. (2019). Comparative study of capsule neural network in various applications. Journal of Artificial Intelligence, 1(1): 19-27. https://doi.org/10.36548/jaicn.2019.1.003

[12] Chen, X., Xu, Y., Wong, D.W.K., Wong, T.Y., Liu, J. (2015). Glaucoma detection based on deep convolutional neural network. 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, pp. 715-718. https://doi.org/10.1109/EMBC.2015.7318462

[13] Alghamdi, H.S., Tang, H.L., Waheeb, S.A., Peto, T. (2016). Automatic optic disc abnormality detection in fundus images: A deep learning approach. Proceedings of the Ophthalmic Medical Image Analysis Third International Workshop, OMIA 2016, Athens, Greece, pp. 17-24. https://doi.org/10.17077/omia.1042

[14] Sengupta, S., Singh, A., Leopold, H.A., Gulati, T., Lakshminarayanan, V. (2018). Application of deep learning in fundus image processing for ophthalmic diagnosis-a review. Artificial Intelligence in Medicine, 102: 101758. https://doi.org/10.1016/j.artmed.2019.101758

[15] Grewal, P.S., Oloumi, F., Rubin, U., Tennant, M.T. (2018). Deep learning in ophthalmology: A review. Canadian Journal of Ophthalmology, 53(4): 309-313. https://doi.org/10.1016/j.jcjo.2018.04.019

[16] Abràmoff, M.D., Lou, Y., Erginay, A., Clarida, W., Amelon, R., Folk, J.C., Niemeijer, M. (2016). Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Investigative Ophthalmology & Visual Science, 57(13): 5200-5206. https://doi.org/10.1167/iovs.16-19964

[17] Colas, E., Besse, A., Orgogozo, A., Schmauch, B., Meric, N., Besse, E. (2016). Deep learning approach for diabetic retinopathy screening. Acta Ophthalmologica, 94(S256). https://doi.org/10.1111/j.1755-3768.2016.0635

[18] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z. (2016). Rethinking the inception architecture for computer vision. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, pp. 2818-2826. https://doi.org/10.1109/CVPR.2016.308

[19] Gargeya, R., Leng, T. (2017). Automated identification of diabetic retinopathy using deep learning. Ophthalmology, 124(7): 962-969. https://doi.org/10.1016/j.ophtha.2017.02.008

[20] Quellec, G., Charrière, K., Boudi, Y., Cochener, B., Lamard, M. (2017). Deep image mining for diabetic retinopathy screening. Medical Image Analysis, 39: 178-193. https://doi.org/10.1016/j.media.2017.04.012

[21] Yang, X.S. (2010). Firefly algorithm, stochastic test functions and design optimisation. International Journal of Bio-inspired Computation, 2(2): 78-84. https://doi.org/10.1504/IJBIC.2010.032124