Special Issue: Advances of Machine Learning and Deep Learning

Machine learning is mainly concerned with designing and analyzing algorithms that allow computers to learn autonomously. These algorithms enable machines to derive rules through automatic data analysis and to rely on these rules to make predictions on unseen data. Traditionally, machine learning encompasses the following phases: problem definition, data collection, model development, and results verification. This traditional approach struggles to meet the needs of artificial intelligence (AI), which has drastically changed the application scenarios of machine learning. By learning feature representations automatically instead of relying on manual feature engineering, deep learning breaks through the limits of traditional machine learning and achieves strikingly superior performance, making a number of extremely complex applications possible.

Deep learning is relatively mature in the field of supervised learning, but research in other areas of deep learning has only just begun, especially in unsupervised learning and reinforcement learning. Deep learning performs excellently in speech recognition and image recognition. Two common models, the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN), are representative instances of deep learning. Nonetheless, deep learning also faces some challenges in problem-solving. Deep learning algorithms require a large volume and diverse range of data, and call for the tuning of numerous parameters. Furthermore, overfitting may occur when models fit the training data too closely, preventing them from generalizing to other domains. What is worse, the training of deep learning models remains a black box, where the learning and inference processes are opaque to researchers.

This Special Issue intends to boost the performance and transparency of deep learning models, and to maximize their feasibility in real-world applications and facilities. Special attention will be given to the following topics, methodologies, and techniques:

  • Complexity reduction of parameters in deep-learning models

  • Enhanced interpretation and reasoning methods that explain the hidden components and shed light on the outputs of deep learning models

  • Incremental self-adaptation and evolution of deep learning models, with recursive updates of weights and pruning of internal structures

  • Combination of new deep learning methods with renowned, widely used architectures

  • New deep learning methods for soft computing and AI environments

  • Emerging applications and new developments of established applications of deep learning in big data, IoT, social media data mining, and web applications

  • New methods for artificial neural networks in combination with deep learning

  • New learning methods for established deep learning architectures

  • Faster and more robust methods for learning of deep models

  • Reasoning about the input-output behavior of deep learning models

  • Deep learning classifiers combined with active learning

  • Evolutionary-based optimization and tuning of deep learning models

  • Transfer learning for deep learning systems

Guest Editors

Prof. Dr. Taner Cevik, Nisantasi University, Turkiye (taner.cevik@nisantasi.edu.tr)

Assoc. Prof. Dr. Fatih Özyurt, Firat University, Turkiye (fatihozyurt@firat.edu.tr)

Deadline for manuscript submission: May 31, 2023. 

All manuscripts and any supplementary material should be submitted via the OJS of the Traitement du Signal journal, available at http://www.iieta.org/Journals/TS/Submission. When submitting the paper, please select "Advances of Machine Learning and Deep Learning" as the article type.