© 2025 The authors. This article is published by IIETA and is licensed under the CC BY 4.0 license (http://creativecommons.org/licenses/by/4.0/).
Autonomous systems are transforming robotics by enabling machines to operate with minimal human intervention. These systems are now applied across a wide range of domains, including industry, healthcare, agriculture, and defence. This review presents a comprehensive analysis of emerging trends and technologies driving the evolution of autonomous systems. Key areas explored include perception, localization, path planning, learning, control, and human–robot interaction. We examine how artificial intelligence and machine learning are used for robust decision-making, as well as recent advances in sensor fusion and Simultaneous Localization and Mapping for environment mapping. Innovative techniques in motion planning and intuitive interfaces are also discussed. Special attention is given to swarm robotics and bio-inspired algorithms that enable scalable and decentralized coordination. The review includes comparative analyses of algorithms, hardware platforms, and real-world use cases. These comparisons highlight current capabilities and existing limitations. Despite considerable progress, challenges remain in ensuring scalability, achieving real-time responsiveness, and maintaining robustness in unstructured environments. Ethical and legal concerns also present ongoing barriers to deployment. Looking ahead, several transformative technologies are emerging. These include quantum computing for solving complex optimization tasks, edge AI for localized intelligence, and 6G connectivity for ultra-fast communication. Together, these technologies are expected to open new frontiers in autonomy and system integration. This paper underscores the need for interdisciplinary research to build autonomous systems that are intelligent, resilient, and socially responsible.
autonomous systems, robotics, artificial intelligence, path planning, human–robot interaction, reinforcement learning
Autonomous systems are intelligent machines or software agents capable of performing tasks with minimal or no human intervention. These systems leverage advancements in artificial intelligence (AI), machine learning (ML), sensor fusion, and real-time decision-making to perceive, interpret, and respond to dynamic environments [1, 2]. Their growing sophistication enables them to operate across various levels of autonomy, ranging from semi-autonomous assistance to full autonomy, making them critical in modern technological ecosystems.
Over the past decade, robotics and autonomous systems [3-6] have witnessed a significant rise across diverse sectors. In industrial manufacturing, autonomous systems have optimized production processes, enabling flexibility, precision, and cost-efficiency [7, 8]. In healthcare, surgical robots and assistive systems have transformed diagnostics and minimally invasive procedures, enhancing accuracy and patient outcomes [9, 10]. Defence applications have benefited from robust unmanned systems for reconnaissance, logistics, and combat support [11]. Meanwhile, agriculture has embraced smart farming through AI-driven robotics that support crop monitoring, harvesting, and yield prediction [12, 13]. Other domains such as food supply chains [14], logistics [15], and social environments [16] are increasingly integrating autonomous capabilities for enhanced efficiency and resilience.
The goal of this review is to examine the most recent trends and breakthroughs in autonomous robotics, focusing on the technological, functional, and application-driven aspects. By synthesizing findings from a wide range of domains and highlighting emerging technologies such as soft robotics [17], swarm intelligence [18], and AI-enhanced control systems [19], this paper aims to provide a panoramic view of where the field is heading. The review also identifies persisting challenges in reliability, safety, ethics, and system integration, which remain key hurdles to widespread adoption and scalability [20, 21].
This paper is organized as follows: Section 2 presents the foundational concepts underpinning robotic autonomy and the classification of autonomous systems. Section 3 reviews the key technological trends shaping the field, including machine learning and AI, sensor fusion and SLAM, motion planning, human–robot interaction, swarm and multi-agent systems, and representative applications. Section 4 provides comparative analyses of algorithms, hardware platforms, use cases, and benchmark datasets. Section 5 discusses the challenges facing the field, including scalability, real-time performance, robustness in unstructured environments, ethical and legal issues, and the emerging role of generative AI. Section 6 examines future trends, covering the integration of quantum computing, 6G, and edge AI, next-generation systems, and convergence with other fields, before the paper concludes with insights into future directions and key research gaps.
To understand the advancements in autonomous systems, it is essential to first define the foundational concepts that underpin robotic autonomy and the classification of autonomous systems.
2.1 Key concepts
Autonomy in robotics refers to the degree to which a robot can perform tasks independently of human control or input. Autonomous systems perceive their environment, make decisions, and execute actions using algorithms, sensor data, and actuators. The concept extends beyond mere automation by enabling autonomous systems to adapt to dynamic and unstructured environments [1, 22].
Levels of Autonomy vary across a spectrum from fully manual systems to fully autonomous ones. These levels are often categorized similarly to the SAE levels defined for autonomous vehicles, ranging from Level 0 (no automation), through intermediate levels of partial and conditional automation, to Level 5 (full automation).
Autonomous operation relies on several core functional modules: perception, localization and mapping, path planning, decision-making and control, and human–robot interaction.
2.2 Classification of autonomous systems
Autonomous systems come in diverse forms based on their structure, mobility, and application. The primary classes include mobile ground robots, manipulators (fixed-base or mobile), aerial robots, underwater robots, and swarm agents (see Table 2 for representative platforms).
Autonomous robotics has seen significant evolution driven by advances in machine learning, perception, control systems, and collaborative intelligence. This section presents key technological trends that are shaping the future of autonomous systems.
3.1 Machine learning and AI in autonomy
The integration of Artificial Intelligence (AI) and Machine Learning (ML) has revolutionized autonomy, enabling autonomous systems to perceive, decide, and act more effectively in complex environments.
3.1.1 Deep learning for perception
Deep learning techniques, particularly convolutional neural networks (CNNs), have significantly improved robot perception by enhancing capabilities in object detection, semantic segmentation, and scene understanding [1, 19]. These advancements allow autonomous systems to interpret noisy or unstructured data from cameras, LiDAR, and other sensors with higher accuracy and reliability.
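As an illustration of how such perception modules are typically wired into a robot stack, the following minimal sketch runs a pretrained CNN detector over a single camera frame. It assumes a recent torchvision installation and a local image file `frame.png`; the model choice and confidence threshold are illustrative, not drawn from any of the reviewed systems.

```python
# Minimal sketch: off-the-shelf CNN detector for robot perception.
# Assumes torchvision >= 0.13 and a test image "frame.png"; the 0.8
# threshold is an illustrative confidence gate, not a recommended value.
import torch
from torchvision.io import read_image
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

img = convert_image_dtype(read_image("frame.png"), torch.float)  # uint8 CHW -> float CHW in [0, 1]
with torch.no_grad():
    pred = model([img])[0]          # dict of boxes, labels, scores for one frame

keep = pred["scores"] > 0.8         # confidence gate before acting on detections
print(pred["boxes"][keep], pred["labels"][keep])
```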
3.1.2 Reinforcement learning for decision-making
Reinforcement learning (RL) enables autonomous systems to learn optimal behaviours through trial and error, offering solutions for navigation, manipulation, and human-robot interaction. RL is particularly valuable in environments where traditional rule-based systems fail due to unpredictability [28, 29]. Hybrid models combining deep learning [19] with RL, often referred to as Deep RL, are increasingly used in autonomous vehicles, drones, and robotic games.
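To make the trial-and-error loop concrete, the sketch below implements tabular Q-learning on a toy one-dimensional corridor. The environment, rewards, and hyperparameters are illustrative assumptions; real robotic RL typically replaces the table with deep function approximators over continuous states.

```python
# Minimal sketch of tabular Q-learning on a toy 1-D corridor: the agent
# learns by trial and error to walk right toward the goal cell.
import numpy as np

n_states, n_actions = 6, 2          # corridor cells; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != n_states - 1:        # goal is the rightmost cell
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else -0.01   # goal reward vs. step cost
        # Bellman update: move Q(s, a) toward reward + discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))                   # learned policy: argmax along each row
```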
3.2 Sensor fusion and SLAM
3.2.1 Multi-sensor integration
Modern SLAM systems rely on sensor fusion, combining data from cameras, IMUs, LiDAR, and GPS to improve robustness and accuracy in challenging conditions [14, 25]. Redundancy and complementary sensing enhance autonomy in low-light, dusty, or GPS-denied environments. A notable real-world example is Boston Dynamics' Spot, which integrates stereo cameras, 3D LiDAR, and IMUs using visual-inertial SLAM to achieve centimetre-level localization accuracy in dynamic industrial sites such as construction zones, power plants, and underground tunnels. This multi-sensor approach allows the robot to maintain stable navigation even in cluttered and unstructured environments.
Similarly, Clearpath Robotics' Husky UGV employs sensor fusion with RGB-D cameras, LiDAR, and RTAB-Map-based SLAM to enable precise autonomous navigation in outdoor field robotics tasks such as mining surveys and precision agriculture. By fusing GPS with local SLAM inputs, the system mitigates drift and maintains accuracy over large, uneven terrains.
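The estimation machinery behind such fusion pipelines is commonly a Kalman-style predict/update cycle. The one-dimensional sketch below, with purely illustrative noise values, fuses an IMU-like velocity prediction with a noisy GPS-like position fix; production systems extend the same idea to full multi-dimensional state vectors.

```python
# Minimal sketch of the Kalman predict/update cycle behind sensor fusion:
# an IMU-style velocity estimate drives the prediction and a noisy
# GPS-style position fix corrects it. All noise values are illustrative.
import numpy as np

dt = 0.1
x, P = 0.0, 1.0                     # position estimate and its variance
q, r = 0.05, 0.5                    # process noise (IMU drift), measurement noise std (GPS)

rng = np.random.default_rng(1)
true_pos, vel = 0.0, 1.0
for _ in range(50):
    true_pos += vel * dt
    # Predict: propagate the state with the velocity estimate, inflate uncertainty
    x += vel * dt
    P += q
    # Update: blend in the GPS fix, weighted by the Kalman gain
    z = true_pos + rng.normal(0, r)
    K = P / (P + r**2)
    x += K * (z - x)
    P *= (1 - K)

print(f"estimate {x:.2f} vs. truth {true_pos:.2f}")
```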
3.2.2 Advances in visual-inertial odometry
Visual-Inertial Odometry (VIO) techniques integrate visual and inertial data for real-time pose estimation. These have seen rapid improvements due to more efficient algorithms and dedicated hardware, contributing to robust indoor and outdoor navigation [2, 26]. For instance, drones like the DJI Matrice 300 RTK use VIO in conjunction with real-time kinematic GPS and LiDAR to operate in complex airspace environments, including near infrastructure and beneath tree canopies, where conventional GPS signals are degraded. This fusion enables real-time trajectory planning, obstacle avoidance, and terrain-adaptive flight paths.
These industry applications demonstrate how sensor fusion and SLAM are no longer confined to academic settings but are being actively deployed in mission-critical systems across diverse domains.
3.3 Motion planning and navigation
Autonomous navigation depends on the robot’s ability to plan and follow paths dynamically in real-world environments.
3.3.1 Dynamic path planning
Autonomous systems must plan trajectories that avoid obstacles, minimize time, and adapt to changes in real-time. Algorithms like RRT*, A*, and D* Lite remain prevalent but are increasingly enhanced with predictive models and context-awareness [24, 27].
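For reference, the sketch below implements the core of grid-based A* with a Manhattan heuristic, one of the planners named above; the occupancy grid, start, and goal are illustrative.

```python
# Minimal sketch of A* on a 4-connected occupancy grid with a Manhattan
# heuristic; unit step costs and the toy map are illustrative.
import heapq

def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None                                              # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],                  # 1 = obstacle
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the obstacle row
```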
3.3.2 Learning-based planners
Emerging motion planning approaches integrate learning-based methods, allowing autonomous systems to generalize from past experiences. For example, imitation learning enables autonomous systems to replicate expert demonstrations, while RL-based planners optimize for reward-driven behaviours in uncertain conditions [28, 29].
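A minimal way to see imitation learning in action is behaviour cloning: fitting a supervised model to expert state-action pairs and reusing it as a policy. In the sketch below the "expert" is a synthetic proportional go-to-goal controller, an illustrative stand-in for real demonstrations.

```python
# Minimal sketch of imitation learning via behaviour cloning: regress
# actions on states from expert demonstrations, then reuse the model as
# a policy. The synthetic expert and network size are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
states = rng.uniform(-1, 1, size=(1000, 2))        # (x, y) offsets from a goal
actions = -0.5 * states                            # expert: proportional controller

policy = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(states, actions)

s = np.array([[0.8, -0.4]])
print("cloned action:", policy.predict(s), "expert:", -0.5 * s)
```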
3.4 Human–robot interaction
As autonomous systems [35] move into shared spaces, interaction with humans becomes critical to their acceptance and effectiveness.
3.4.1 Intuitive interfaces
The development of natural interfaces such as voice commands, gesture recognition, and AR/VR tools makes autonomous systems more accessible and operable by non-experts. Social autonomous systems and assistive systems often use emotional cues and expressive behaviours to enhance communication [16, 36].
3.4.2 Trust and safety in shared workspaces
Building trust in autonomous systems requires transparency, predictability, and safety assurance. Research has shown that human trust increases when autonomous systems explain their actions or intentions [20, 37]. Collaborative autonomous systems (cobots) are increasingly equipped with safety protocols and force-limited actuators to enable close human interaction without risk.
3.5 Swarm and multi-agent systems
Swarm robotics leverages decentralized control and local interactions to coordinate large groups of simple agents.
3.5.1 Decentralized control
Swarm systems operate without a central controller, using algorithms inspired by nature (e.g., ants, birds) to achieve complex collective behaviours such as formation flying, area coverage, and object transport [12, 18].
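The sketch below illustrates the flavour of such decentralised rules with a boids-style flocking loop, in which each agent reacts only to neighbours within a local radius and no central controller exists; the radii and gains are illustrative assumptions.

```python
# Minimal sketch of decentralised flocking: each agent applies local
# cohesion, separation, and alignment rules to nearby neighbours only.
# Neighbourhood radius and rule gains are illustrative.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(0, 10, (20, 2))                  # 20 agents in a 2-D arena
vel = rng.normal(0, 0.5, (20, 2))

for _ in range(100):
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < 3.0)                  # local neighbourhood only
        if nbr.any():
            cohesion = pos[nbr].mean(axis=0) - pos[i]      # move toward neighbours
            separation = (pos[i] - pos[nbr]).sum(axis=0)   # avoid crowding
            alignment = vel[nbr].mean(axis=0) - vel[i]     # match neighbour heading
            vel[i] += 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
    pos += 0.1 * vel

print("velocity spread after flocking:", vel.std(axis=0).round(3))
```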
3.5.2 Bio-inspired algorithms
Bio-inspired strategies such as particle swarm optimization, ant colony algorithms, and neural-based coordination improve robustness and fault tolerance. These methods are especially suited for environments where scalability and redundancy are critical [17, 28].
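As a concrete example of this family, the following sketch implements a bare-bones particle swarm optimisation (PSO) loop minimising a toy two-dimensional cost surface; the swarm size and coefficients are illustrative defaults, not tuned values.

```python
# Minimal sketch of particle swarm optimisation (PSO) minimising a toy
# quadratic cost with minimum at (3, 3); coefficients are illustrative.
import numpy as np

cost = lambda p: ((p - 3.0) ** 2).sum(axis=1)      # toy cost, minimum at (3, 3)

rng = np.random.default_rng(4)
x = rng.uniform(-10, 10, (30, 2))                  # particle positions
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), cost(x)               # personal bests
gbest = pbest[pbest_val.argmin()]                  # global best

for _ in range(200):
    r1, r2 = rng.random((2, 30, 1))
    # inertia + pull toward personal best + pull toward global best
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x += v
    better = cost(x) < pbest_val
    pbest[better], pbest_val[better] = x[better], cost(x)[better]
    gbest = pbest[pbest_val.argmin()]

print("best found:", gbest.round(3))               # converges near (3, 3)
```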
3.6 Applications and case studies
Autonomous systems have become integral across several domains, each showcasing unique requirements and challenges.
3.6.1 Autonomous vehicles
Autonomous vehicles (AVs) incorporate advanced perception, planning, and control systems. Key challenges include perception under adverse weather, ethical decision-making, and regulatory compliance [24, 36]. The use of AI, V2X communication, and predictive QoS is driving rapid innovation [38].
3.6.2 Service autonomous systems in healthcare
Healthcare autonomous systems assist in surgery, rehabilitation, eldercare, and disinfection tasks. Surgical autonomous systems [39] now support semi- and fully autonomous operations, improving precision and reducing fatigue [9, 11]. Human-centric design and compliance with medical standards remain critical.
3.6.3 Agricultural robotics
In agriculture, autonomous systems perform crop monitoring, weeding, harvesting, and soil analysis. These autonomous systems rely heavily on machine vision, AI, and mobility for operation in unstructured environments [14, 40]. Smart farming integrates these systems into broader IoT ecosystems for precision agriculture.
A meaningful understanding of autonomous systems [41, 42] necessitates evaluating their core components (algorithms, hardware, and real-world applications) through systematic comparison. This section synthesizes insights from the literature to highlight trade-offs, strengths, and ongoing challenges across key dimensions.
4.1 Algorithm comparison
Table 1 provides a comparative overview of key algorithm types used in autonomy, detailing their purposes, advantages, limitations, and typical real-world use cases.
Table 1. Comparative overview
| Algorithm Type | Purpose | Advantages | Limitations | Typical Use Cases |
|---|---|---|---|---|
| Deep CNNs (e.g., YOLO, ResNet) | Visual perception, object detection | High accuracy; real-time capable with hardware | Requires >10,000 labelled images for effective training; performance drops in low-light/noisy environments | LiDAR-based obstacle detection in urban autonomous vehicles; pedestrian recognition in hospital assistance robots; object classification in UAV surveillance |
| SLAM (e.g., ORB-SLAM, RTAB-Map) | Mapping and localization | Robust in unknown environments | Degrades under >30% dynamic scene changes; sensitive to <20 lux lighting conditions | Indoor navigation for warehouse robots; real-time localization in AR headsets; terrain mapping by agricultural drones |
| RRT*, A* | Motion planning | Efficient pathfinding; well-studied | Suboptimal in dynamic environments; replanning latency ~100–300 ms | Path planning for industrial AGVs in dynamic factory layouts; obstacle avoidance for delivery drones in urban airspace |
| Deep RL (e.g., PPO, DQN) [30] | Policy learning and control | Learns from interaction; generalizes behavior | Requires >1M environment steps for convergence; performance unstable under sparse rewards | Robotic manipulation of irregular objects in warehouses; autonomous vehicle lane-merging on highways; adaptive NPC behavior in interactive robotics |
| Kalman/Particle Filters | Sensor fusion, localization | Proven mathematical model; real-time capable | Assumes linearity or Gaussian noise; tracking error increases >15% with >10% sensor dropouts | Pose estimation for UAVs flying in GPS-denied tunnels; wearable motion tracking for assistive exoskeletons; indoor SLAM for mobile service robots |
| Behavior Trees/FSMs | Decision logic | Easy to implement; modular | Limited adaptivity; hard-coded logic can’t handle >10 concurrent context switches | Task scheduling in home assistant robots (e.g., cleaning, fetching); patrol routines in security robots; game AI in robotic companions |
4.2 Hardware platform comparison
The hardware platforms powering these systems also vary significantly in sensor configuration, processing capacity, and mobility. Table 2 summarizes commonly used platforms across different domains.
Table 2. Hardware platform comparison
| Platform | Sensors | Processing | Mobility Type | Primary Applications | Notable Models |
|---|---|---|---|---|---|
| Mobile Robots [30] | LiDAR, cameras, IMU, GPS | Jetson TX2/Xavier, Raspberry Pi, Intel NUC | Wheeled or legged | Indoor logistics, surveillance | TurtleBot 4, Boston Dynamics Spot |
| Manipulators | Force/torque, encoders, cameras | Onboard microcontrollers or external PC | Fixed base or mobile | Industrial automation, surgery | UR5, Kinova Gen3, da Vinci system |
| Aerial Robots [32] | IMU, barometer, cameras | PX4, Jetson Nano | Multirotor | Mapping, delivery, monitoring | DJI M300 RTK, Parrot Anafi, Skydio |
| Underwater Robots | Sonar, depth sensors, DVL | Embedded systems | Propeller-based | Inspection, marine biology | BlueROV2, OceanOne, Iver3 |
| Swarm Agents [18] | Minimal (IR, RF, IMU) | Lightweight MCU | Wheeled, flying | Research, collective tasks | Kilobot, Crazyflie 2.1 |
4.3 Use case matrix
Table 3 outlines major use cases for autonomous systems, highlighting key technologies, performance criteria, and major deployment challenges.
Table 3. Use case matrix
| Use Case | Key Technologies | Performance Metrics | Challenges |
|---|---|---|---|
| Autonomous Vehicles [43] | CNNs, LiDAR, SLAM, V2X | Precision, reaction time, safety rate | Urban complexity, ethical decision-making |
| Surgical Robotics [44, 45] | Precision actuation, vision-guided manipulation | Accuracy (sub-mm), safety, latency | Regulatory approval, high costs |
| Warehouse Automation | Navigation, planning, barcode/RFID processing | Throughput, uptime, adaptability | Dynamic inventory layout |
| Elderly Care Robots | Voice interfaces, person recognition, safety systems | Responsiveness, user trust, emotion detection | Ethical concerns, personalization |
| Agricultural Robotics [40] | Multispectral vision, terrain navigation, AI analysis | Yield boost, weed detection accuracy | Outdoor variability, crop generalization |
4.4 Benchmarks and datasets
Table 4 compiles key benchmark datasets and performance metrics commonly used to evaluate SLAM, perception, and navigation systems.
Table 4. Common benchmark datasets
| Dataset | Focus Area | Usage | Remarks |
|---|---|---|---|
| KITTI | AV perception and SLAM | Object detection, tracking, odometry | Widely used; stereo and LiDAR data |
| TUM RGB-D | Visual SLAM, indoor nav | Pose estimation, mapping | Real-time indoor RGB-D sequences |
| COCO / ImageNet | General computer vision | Object recognition, segmentation | Rich, diverse annotations |
| AirSim | Aerial robotics, simulation | Reinforcement learning, control | Realistic physics; drone platform |
| Robot@Home | Service robots | Semantic mapping, HRI | Real-world domestic environments |
Despite impressive advances, autonomous robotics still faces significant challenges that hinder its widespread deployment and general-purpose applicability. These challenges span technical, environmental, and societal domains [46].
5.1 Scalability
Current autonomous systems frequently encounter scalability challenges when transitioning from controlled laboratory environments to complex, real-world applications. These issues become particularly evident in multi-agent coordination scenarios, such as swarm robotics, where maintaining synchronized behavior across numerous units is difficult. Similarly, distributed decision-making within large fleets of autonomous agents introduces significant complexity in terms of communication, consensus, and real-time responsiveness. As the range of tasks and operational environments diversifies, the overall system complexity escalates, making it harder to maintain robustness, efficiency, and generalizability across different contexts.
5.2 Real-time performance
Autonomous systems [6] demand low-latency decision-making capabilities for critical tasks such as collision avoidance, real-time manipulation, and seamless human interaction. However, achieving this remains challenging due to several technical barriers. One major obstacle is the high computational load associated with processing large volumes of sensor data and performing complex AI inference tasks. Additionally, bandwidth limitations and latency issues hinder effective offloading to edge or cloud platforms, especially in scenarios requiring rapid response. Meeting strict real-time performance guarantees becomes even more difficult in dynamic, unpredictable environments where delays or missed decisions can compromise safety and effectiveness.
5.3 Robustness in unstructured environments
Most robots remain brittle when operating outside of structured or pre-mapped environments. They face significant challenges such as handling perceptual noise, managing occlusions, and coping with sensor failures. Environmental variability including changes in weather, terrain, or the appearance of unexpected obstacles further complicates reliable operation. Additionally, a major hurdle lies in enabling robots to generalize learned policies and behaviours to unfamiliar domains or tasks, which limits their adaptability and robustness in real-world applications.
5.4 Ethical and legal issues in deployment
As autonomous systems increasingly operate in public, personal, and critical infrastructure spaces, ethical and legal considerations become central to their development and deployment. While ethical discourse has traditionally focused on abstract concepts such as fairness, accountability, and transparency, recent international efforts have produced more concrete guidelines and regulatory frameworks to guide ethical AI and autonomous system design.
One prominent example is the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which outlines principles such as transparency, accountability, privacy, and algorithmic bias mitigation. These guidelines advocate for value-based design, emphasizing that systems should respect human rights, cultural norms, and environmental sustainability from the outset. For instance, IEEE recommends embedding explainability into autonomous decision-making, allowing stakeholders to understand and contest a system’s output, which is especially critical in healthcare robotics and autonomous vehicles.
In a parallel effort, the European Union's AI Act (2021) classifies AI-based systems, including autonomous systems, into different risk categories (unacceptable, high-risk, limited-risk, and minimal-risk). High-risk systems, such as autonomous vehicles and biometric surveillance robots, are required to undergo rigorous conformity assessments, maintain human oversight mechanisms, and ensure traceability of decisions. This risk-based approach provides a pragmatic framework for aligning ethical and legal compliance with application contexts.
Additionally, case-based concerns are emerging in real-world deployments. For example, in autonomous vehicle accidents, establishing accountability remains complex: should liability rest with the manufacturer, the algorithm designer, or the user? In social robotics, privacy concerns arise when domestic assistant systems continuously collect audio-visual data, often without explicit consent or adequate data protection. To ensure ethical deployment, autonomous systems must be developed with interdisciplinary input, combining insights from law, ethics, computer science, and human-computer interaction. A shift toward proactive governance, backed by enforceable standards and contextual testing, is necessary to foster public trust and responsible innovation.
5.5 Emerging role of generative AI in autonomous systems
A growing trend in autonomous robotics is the integration of Generative AI, particularly large language models (LLMs) such as GPT-4, into robotic perception, planning, and decision-making. These models, originally designed for natural language understanding and generation, are increasingly being adapted for task decomposition, semantic understanding, and human–robot interaction.
For instance, LLMs can convert high-level human commands (“Clean the lab and return to the charging station”) into a series of context-aware subtasks, enabling robots to reason about sequences, tools, and environmental constraints without explicit pre-programmed logic. Research prototypes from institutions like OpenAI, Google DeepMind, and Stanford have shown that LLMs can assist in zero-shot task planning, scene interpretation, and even natural-language-based policy learning for manipulation tasks.
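A schematic of this decomposition pattern is sketched below. The `call_llm` helper is a hypothetical placeholder for whatever chat-completion endpoint or local model a given robot stack exposes, and the JSON subtask schema is an assumption made for illustration, not a published interface.

```python
# Illustrative sketch of LLM-based task decomposition as described above.
# `call_llm` is a hypothetical stand-in for an actual LLM API or local
# model; the prompt wording and subtask schema are assumptions.
import json

def call_llm(prompt: str) -> str:
    """Placeholder: route to a real LLM endpoint in a deployed system."""
    raise NotImplementedError

def decompose(command: str) -> list[dict]:
    prompt = (
        "Decompose the robot command into ordered subtasks. "
        'Reply as a JSON list of {"action": ..., "target": ...} objects.\n'
        f"Command: {command}"
    )
    return json.loads(call_llm(prompt))

# Expected shape of the result for the example command in the text:
# decompose("Clean the lab and return to the charging station")
# -> [{"action": "navigate", "target": "lab"},
#     {"action": "clean", "target": "lab floor"},
#     {"action": "navigate", "target": "charging station"},
#     {"action": "dock", "target": "charger"}]
```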
In practical deployments, generative models are being used to improve multimodal interfaces, allowing robots to process and integrate spoken commands, visual cues, and sensor feedback simultaneously. This is particularly relevant for assistive robots, warehouse automation, and field robotics, where adaptability to unstructured instructions is critical.
Moreover, combining LLMs with robotics frameworks (e.g., ROS with LLM plug-ins or APIs) opens new possibilities for intention prediction, error recovery, and explainability, making autonomous systems more intuitive and human-centric. As these models become more efficient and hardware-friendly, their real-time use in embedded robotics systems is expected to increase.
Generative AI thus represents a paradigm shift in autonomy: from rule-based planning to language-informed, reasoning-driven behavior synthesis, laying the groundwork for more general-purpose, conversationally operable autonomous systems.
Autonomous robotics [27] is rapidly evolving, and several emerging technologies promise to transform the field in the coming decades:
6.1 Integration of quantum computing, 6G, and edge AI
Quantum computing could dramatically accelerate computationally intensive tasks such as motion planning, Simultaneous Localization and Mapping (SLAM), and control by solving complex optimization problems far more efficiently than classical approaches. Recent research highlights the potential of Quantum Approximate Optimization Algorithms (QAOA) in solving SLAM-related challenges, such as graph-based map merging and non-linear pose estimation. For example, Dalyac [47] demonstrated how QAOA can reduce the complexity of SLAM optimization by leveraging quantum superposition and entanglement to explore multiple hypotheses in parallel.
A real-world illustration of quantum application is Volkswagen’s quantum routing initiative, which used quantum algorithms to optimize taxi fleet movements in urban environments. This approach is directly relevant to autonomous multi-robot systems requiring dynamic task allocation and route optimization under resource constraints. Furthermore, exploratory work in quantum machine learning (QML) is emerging to improve sensor data interpretation in robotic vision and perception pipelines.
The advent of 6G networks is also expected to revolutionize communication in autonomous systems by offering ultra-low-latency and high-throughput connectivity. Industry whitepapers from [48] envision 6G as a foundational enabler for real-time collaborative robotics, especially in scenarios involving robot swarms, autonomous vehicles, and distributed decision-making. These networks will support massive machine-type communications (mMTC) and ultra-reliable low-latency communication (URLLC), which are essential for coordinating robots in dynamic, mission-critical environments.
Additionally, the rise of Edge AI will facilitate more robust, on-device intelligence by processing data locally. This not only reduces reliance on potentially unstable cloud connections but also enhances system resilience and ensures greater privacy in sensitive applications. Together, the convergence of quantum computing, 6G, and Edge AI is poised to unlock unprecedented capabilities in autonomy by addressing the key bottlenecks of computational efficiency, communication latency, and adaptive intelligence.
6.2 Next-generation systems
Future autonomous systems are anticipated to incorporate self-repairing and self-adaptive hardware, enabled by breakthroughs in soft robotics and modular architectures, allowing machines to recover from damage and adjust to changing environments. These systems will also feature lifelong learning agents capable of continuous adaptation without suffering from catastrophic forgetting, ensuring sustained performance in dynamic settings. A significant development will be the emergence of unified cognitive architectures that integrate symbolic reasoning with neural networks, combining the strengths of rule-based logic and data-driven learning. Furthermore, there will be a strong emphasis on human-centric design, prioritizing transparency, trust, and ethical compliance to facilitate safe and effective collaboration between humans and robots.
6.3 Convergence with other fields
Robotics will continue to advance through interdisciplinary collaboration, drawing heavily from neuroscience and cognitive science to develop more accurate models of perception, learning, and decision-making. Materials science will play a pivotal role in creating energy-efficient and resilient autonomous systems, enabling enhanced durability and sustainability. Additionally, research in human-computer interaction will contribute significantly to the development of intuitive interfaces and collaborative frameworks, improving how humans and robots communicate and work together in diverse environments.
Autonomous robotics is undergoing a paradigm shift driven by breakthroughs in AI, sensors, computation, and interdisciplinary integration. This review has examined the fundamental concepts, emerging trends, and critical challenges shaping the field. As robots increasingly permeate domains such as healthcare, transportation, agriculture, and defence, the importance of interdisciplinary research becomes ever more vital. Future progress will depend not only on technical innovation but also on careful consideration of societal implications, regulatory alignment, and ethical deployment. Through collaborative efforts spanning engineering, computer science, law, ethics, and human factors, we can build autonomous systems that are not only intelligent but also responsible, scalable, and trustworthy.
[1] Zhang, T., Li, Q., Zhang, C.S., Liang, H.W., Li, P., Wang, T.M., Wu, C. (2017). Current trends in the development of intelligent unmanned autonomous systems. Frontiers of Information Technology & Electronic Engineering, 18: 68-85. https://doi.org/10.1631/FITEE.1601650
[2] Liu, B. (2021). Recent advancements in autonomous robots and their technical analysis. Mathematical Problems in Engineering, 2021(1): 6634773. https://doi.org/10.1155/2021/6634773
[3] da Costa Barros, Í.R., Nascimento, T.P. (2021). Robotic mobile fulfillment systems: A survey on recent developments and research opportunities. Robotics and Autonomous Systems, 137: 103729. https://doi.org/10.1016/j.robot.2021.103729
[4] Tan, S.Y., Taeihagh, A. (2021). Governing the adoption of robotics and autonomous systems in long-term care in Singapore. Policy and Society, 40(2): 211-231. https://doi.org/10.1080/14494035.2020.1782627
[5] Hancock, P.A. (2020). Imposing limits on autonomous systems. In New Paradigms in Ergonomics, pp. 134-141.
[6] Jung, C., Finazzi, A., Seong, H., Lee, D., Lee, S., Kim, B., Shim, H. (2023). Autonomous system for head-to-head race: Design, implementation and analysis; team KAIST at the Indy autonomous challenge. Field Robotics, 3: 766-800. https://doi.org/10.55417/fr.2023024
[7] Dzedzickis, A., Subačiūtė-Žemaitienė, J., Šutinys, E., Samukaitė-Bubnienė, U., Bučinskas, V. (2021). Advanced applications of industrial robotics: New trends and possibilities. Applied Sciences, 12(1): 135. https://doi.org/10.3390/app12010135
[8] Borboni, A., Reddy, K.V.V., Elamvazuthi, I., AL-Quraishi, M.S., Natarajan, E., Azhar Ali, S.S. (2023). The expanding role of artificial intelligence in collaborative robots for industrial applications: A systematic review of recent works. Machines, 11(1): 111. https://doi.org/10.3390/machines11010111
[9] Attanasio, A., Scaglioni, B., De Momi, E., Fiorini, P., Valdastri, P. (2021). Autonomy in surgical robotics. Annual Review of Control, Robotics, and Autonomous Systems, 4(1): 651-679. https://doi.org/10.1146/annurev-control-062420-090543
[10] Reddy, K., Gharde, P., Tayade, H., Patil, M., Reddy, L.S., Surya, D. (2023). Advancements in robotic surgery: A comprehensive overview of current utilizations and upcoming frontiers. Cureus, 15(12): e50415. https://doi.org/10.7759/cureus.50415
[11] Ha, Q.P., Yen, L., Balaguer, C. (2019). Robotic autonomous systems for earthmoving in military applications. Automation in Construction, 107: 102934. https://doi.org/10.1016/j.autcon.2019.102934
[12] Sharma, V., Tripathi, A.K., Mittal, H. (2022). Technological revolutions in smart farming: Current trends, challenges & future directions. Computers and Electronics in Agriculture, 201: 107217. https://doi.org/10.1016/j.compag.2022.107217
[13] Saleem, M.H., Potgieter, J., Arif, K.M. (2021). Automation in agriculture by machine and deep learning techniques: A review of recent developments. Precision Agriculture, 22(6): 2053-2091. https://doi.org/10.1007/s11119-021-09806-x
[14] Duong, L.N., Al-Fadhli, M., Jagtap, S., Bader, F., Martindale, W., Swainson, M., Paoli, A. (2020). A review of robotics and autonomous systems in the food industry: From the supply chains perspective. Trends in Food Science & Technology, 106: 355-364. https://doi.org/10.1016/j.tifs.2020.10.028
[15] Srinivas, S., Ramachandiran, S., Rajendran, S. (2022). Autonomous robot-driven deliveries: A review of recent developments and future directions. Transportation Research Part E: Logistics and Transportation Review, 165: 102834. https://doi.org/10.1016/j.tre.2022.102834
[16] Mahdi, H., Akgun, S.A., Saleh, S., Dautenhahn, K. (2022). A survey on the design and evolution of social robots—Past, present and future. Robotics and Autonomous Systems, 156: 104193. https://doi.org/10.1016/j.robot.2022.104193
[17] Yasa, O., Toshimitsu, Y., Michelis, M.Y., Jones, L.S., Filippi, M., Buchner, T., Katzschmann, R.K. (2023). An overview of soft robotics. Annual Review of Control, Robotics, and Autonomous Systems, 6(1): 1-29. https://doi.org/10.1146/annurev-control-062322-100607
[18] Dias, P.G.F., Silva, M.C., Rocha Filho, G.P., Vargas, P.A., Cota, L.P., Pessin, G. (2021). Swarm robotics: A perspective on the latest reviewed concepts and applications. Sensors, 21(6): 2062. https://doi.org/10.3390/s21062062
[19] Soori, M., Arezoo, B., Dastres, R. (2023). Artificial intelligence, machine learning and deep learning in advanced robotics, a review. Cognitive Robotics, 3: 54-70. https://doi.org/10.1016/j.cogr.2023.04.001
[20] Winfield, A.F., Michael, K., Pitt, J., Evers, V. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems [scanning the issue]. Proceedings of the IEEE, 107(3): 509-517. https://doi.org/10.1109/JPROC.2019.2900622
[21] Fisher, M., Mascardi, V., Rozier, K.Y., Schlingloff, B.H., Winikoff, M., Yorke-Smith, N. (2021). Towards a framework for certification of reliable autonomous systems. Autonomous Agents and Multi-Agent Systems, 35: 8. https://doi.org/10.1007/s10458-020-09487-2
[22] Mostafa, S.A., Ahmad, M.S., Mustapha, A. (2019). Adjustable autonomy: A systematic literature review. Artificial Intelligence Review, 51: 149-186. https://doi.org/10.1007/s10462-017-9560-8
[23] Nahavandi, S., Alizadehsani, R., Nahavandi, D., Mohamed, S., Mohajer, N., Rokonuzzaman, M., Hossain, I. (2025). A comprehensive review on autonomous navigation. ACM Computing Surveys, 57(9): 1-67. https://doi.org/10.1145/3727642
[24] Parekh, D., Poddar, N., Rajpurkar, A., Chahal, M., Kumar, N., Joshi, G.P., Cho, W. (2022). A review on autonomous vehicles: Progress, methods and challenges. Electronics, 11(14): 2162. https://doi.org/10.3390/electronics11142162
[25] Balestrieri, E., Daponte, P., De Vito, L., Lamonaca, F. (2021). Sensors and measurements for unmanned systems: An overview. Sensors, 21(4): 1518. https://doi.org/10.3390/s21041518
[26] Loganathan, A., Ahmad, N.S. (2023). A systematic review on recent advances in autonomous mobile robot navigation. Engineering Science and Technology, an International Journal, 40: 101343. https://doi.org/10.1016/j.jestch.2023.101343
[27] Licardo, J.T., Domjan, M., Orehovački, T. (2024). Intelligent robotics—A systematic review of emerging technologies and trends. Electronics, 13(3): 542. https://doi.org/10.3390/electronics13030542
[28] Singh, B., Kumar, R., Singh, V.P. (2022). Reinforcement learning in robotic applications: A comprehensive survey. Artificial Intelligence Review, 55(2): 945-990. https://doi.org/10.1007/s10462-021-09997-9
[29] Ravichandar, H., Polydoros, A.S., Chernova, S., Billard, A. (2020). Recent advances in robot learning from demonstration. Annual Review of Control, Robotics, and Autonomous Systems, 3(1): 297-330. https://doi.org/10.1146/annurev-control-100819-063206
[30] Rubio, F., Valero, F., Llopis-Albert, C. (2019). A review of mobile robots: Concepts, methods, theoretical framework, and applications. International Journal of Advanced Robotic Systems, 16(2): 1729881419839596. https://doi.org/10.1177/1729881419839596
[31] Raj, R., Kos, A. (2022). A comprehensive study of mobile robot: History, developments, applications, and future research perspectives. Applied Sciences, 12(14): 6951. https://doi.org/10.3390/app12146951
[32] Telli, K., Kraa, O., Himeur, Y., Ouamane, A., Boumehraz, M., Atalla, S., Mansoor, W. (2023). A comprehensive review of recent research trends on unmanned aerial vehicles (UAVs). Systems, 11(8): 400. https://doi.org/10.3390/systems11080400
[33] Ubina, N.A., Cheng, S.C. (2022). A review of unmanned system technologies with its application to aquaculture farm monitoring and management. Drones, 6(1): 12. https://doi.org/10.3390/drones6010012
[34] Pinskier, J., Howard, D. (2022). From bioinspiration to computer generation: Developments in autonomous soft robot design. Advanced Intelligent Systems, 4(1): 2100086. https://doi.org/10.1002/aisy.202100086
[35] Parmar, H., Khan, T., Tucci, F., Umer, R., Carlone, P. (2022). Advanced robotics and additive manufacturing of composites: Towards a new era in Industry 4.0. Materials and Manufacturing Processes, 37(5): 483-517. https://doi.org/10.1080/10426914.2020.1866195
[36] Matthews, G., Hancock, P.A., Lin, J., Panganiban, A.R., Reinerman-Jones, L.E., Szalma, J.L., Wohleber, R.W. (2021). Evolution and revolution: Personality research for the coming world of robots, artificial intelligence, and autonomous systems. Personality and Individual Differences, 169: 109969. https://doi.org/10.1016/j.paid.2020.109969
[37] Matthews, G., Lin, J., Panganiban, A.R., Long, M.D. (2019). Individual differences in trust in autonomous robots: Implications for transparency. IEEE Transactions on Human-Machine Systems, 50(3): 234-244. https://doi.org/10.1109/THMS.2019.2947592
[38] Boban, M., Giordani, M., Zorzi, M. (2022). Predictive quality of service: The next frontier for fully autonomous systems. IEEE Network, 35(6): 104-110. https://doi.org/10.1109/MNET.001.2100237
[39] Haidegger, T. (2019). Autonomy for surgical robots: Concepts and paradigms. IEEE Transactions on Medical Robotics and Bionics, 1(2): 65-76. https://doi.org/10.1109/TMRB.2019.2913282
[40] Vougioukas, S.G. (2019). Agricultural robotics. Annual Review of Control, Robotics, and Autonomous Systems, 2(1): 365-392. https://doi.org/10.1146/annurev-control-053018-023617
[41] Wong, C., Yang, E., Yan, X.T., Gu, D. (2018). Autonomous robots for harsh environments: A holistic overview of current solutions and ongoing challenges. Systems Science & Control Engineering, 6(1): 213-219. https://doi.org/10.1080/21642583.2018.1477634
[42] Cognominal, M., Patronymic, K., Wańkowicz, A. (2021). Evolving field of autonomous mobile robotics: Technological advances and applications. Fusion of Multidisciplinary Research, An International Journal, 2(2): 189-200.
[43] Sonko, S., Etukudoh, E.A., Ibekwe, K.I., Ilojianya, V.I., Daudu, C.D. (2024). A comprehensive review of embedded systems in autonomous vehicles: Trends, challenges, and future directions. World Journal of Advanced Research and Reviews, 21(1): 2009-2020. https://doi.org/10.30574/wjarr.2024.21.1.0258
[44] Thai, M.T., Phan, P.T., Hoang, T.T., Wong, S., Lovell, N.H., Do, T.N. (2020). Advanced intelligent systems for surgical robotics. Advanced Intelligent Systems, 2(8): 1900138. https://doi.org/10.1002/aisy.201900138
[45] von Haxthausen, F., Böttger, S., Wulff, D., Hagenah, J., García-Vázquez, V., Ipsen, S. (2021). Medical robotics for ultrasound imaging: Current systems and future trends. Current Robotics Reports, 2: 55-71. https://doi.org/10.1007/s43154-020-00037-y
[46] Rijwani, T., Kumari, S., Srinivas, R., Abhishek, K., Iyer, G., Vara, H., Gupta, M. (2025). Industry 5.0: A review of emerging trends and transformative technologies in the next industrial revolution. International Journal on Interactive Design and Manufacturing (IJIDeM), 19(2): 667-679. https://doi.org/10.1007/s12008-024-01943-7
[47] Dalyac, C. (2023). Quantum many-body dynamics for combinatorial optimisation and machine learning. Doctoral dissertation, Sorbonne Université.
[48] Calandra, D., Pratticò, F.G., Cannavò, A., Casetti, C., Lamberti, F. (2024). Digital twin-and extended reality-based telepresence for collaborative robot programming in the 6G perspective. Digital Communications and Networks, 10(2): 315-327. https://doi.org/10.1016/j.dcan.2022.10.007