Federated Learning for Secure and Privacy-preserving AI in Cloud Computing Environments

The development and deployment of ethical artificial intelligence is becoming increasingly important given the rapid advance of AI technologies and applications. Centralised collection and integration of user data is often prohibited by confidentiality, security, and data-governance restrictions, so training effective models while preserving user privacy is a significant challenge. Federated learning has attracted considerable interest because it allows participants to jointly train a shared model without disclosing their local data. Evaluations have nevertheless shown that attackers may still compromise commercial applications, such as self-driving navigation, ubiquitous medical data, and automated decision-making, by exploiting the shared model parameters. Neural network technology is becoming commonplace across many sectors, and federated learning is an advanced form of distributed training in which each client collaborates with a cloud server to train a single shared model. Federated learning thus enables the design of a secure distributed computing environment: it allows multiple parties to cooperatively train a model without direct data sharing, giving each party a better model than it could obtain working alone.

In particular, this issue first considers edge-cloud-supported federated learning for communication-efficient and secure sharing of user resource data. In the typical cloud architecture, data must be sent to the cloud for processing, which prolongs transmission and response times. Edge computing addresses this: processing data on edge nodes speeds up responses and reduces transmission delays, and the need for artificial intelligence at the network edge has therefore been raised. Nevertheless, the data held by an individual edge node is limited and does not meet the requirements of AI training. Conventional intelligent algorithms rely on a centralised computing paradigm in which all user data is pooled into a single cloud, and this pooling poses a serious risk to personal privacy. A federated learning process within a privacy-preserving computing architecture handles this difficulty: user data remains distributed across the user devices, and in each iteration a set of model parameters is trained locally on every terminal and then transmitted to the central cloud for aggregation and updating.
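The iteration described above, local training on each terminal followed by aggregation in the central cloud, can be sketched as a minimal federated-averaging (FedAvg) loop. The toy data, model, and function names below are illustrative assumptions, not part of this call; the point is only that raw data never leaves a client, while parameters do.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative assumptions throughout):
# each client fits a linear model on its private shard; only the learned
# parameters travel to the "cloud", never the raw data.

def local_update(w, X, y, lr=0.1, epochs=20):
    """One client's local training round; (X, y) stays on the device."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w = w - lr * grad
    return w

def cloud_aggregate(updates, sizes):
    """Central cloud averages client parameters, weighted by data size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three clients, each holding a private shard of data.
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(10):  # each round: broadcast, local training, aggregation
    updates = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = cloud_aggregate(updates, [len(y) for _, y in clients])

print(np.round(w_global, 2))  # converges towards true_w
```

A production system would add the protections this issue solicits, such as secure aggregation or homomorphic encryption of the uploaded parameters, since plain parameter averaging alone does not rule out inference attacks on the updates.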

This special issue explores the latest developments in federated learning for privacy-preserving computing. Federated learning provides strong incentive structures, substantial security and privacy protection, and the ability to train and deploy global AI models across multiple decentralised data sources.

Contributions are invited on, but not restricted to, the following:

  • Moving towards responsible artificial intelligence: federated learning as a privacy-preserving technology
  • Federated learning for privacy-preserving cloud computing in intelligent grids
  • Cloud computing-based federated learning for the privacy-preserving Internet of Medical Things
  • Robust privacy protection combined with highly efficient federated learning in cloud computing
  • Safe and effective federated learning for edge-to-cloud cooperation in smart grids
  • Sequential federated learning approaches that preserve privacy in cloud computing
  • Federated learning-based privacy-preserving cloud computing systems for medical monitoring
  • Integrating homomorphic encryption with privacy-preserving federated learning for clinical diagnostics
  • Secure, high-performance, privacy-preserving federated learning methods
  • Credible and privacy-preserving federated learning with trusted central authorities
  • Secure federated learning techniques for pluralistic data exchange in networked IoT


  • Deadline for manuscript submissions: September 30, 2024
  • Expected publication date (tentative): July 2025

Guest Editors:

Dr. Muhammad Zunnurain Hussain, Bahria University Lahore Campus, Pakistan.

Email: Zunnurain.bulc@bahria.edu.pk, zunnurain.bulc@hotmail.com

Dr. Sushank Chaudhary, Chulalongkorn University, Thailand.

Email: sushankchaudhary@gmail.com, sushank.c@chula.ac.th

Dr. Mohd Izuan Hafez Bin Ninggal, Universiti Putra Malaysia, Malaysia.

Email: mohdizuan@upm.edu.my