Connected Digital Things
Osama Zaki

Several terms, such as Digital Ecosystem, Digital Life, Digital World, and Digital Earth, have been used to describe the growth in technology. Digital twins are contributing to this progress and will play a major role in the coming decades. More digital creatures will be added to our environments to ease our lives and to reduce harm and danger. But can we trust those things? Please join the Gemini call on the 29th of March. A reliability ontology was developed to model hardware faults, software errors, autonomy/operation mistakes, and inaccuracy in control. These different types of problems are mapped onto different failure modes. The purpose of the reliability ontology is to predict, detect, and diagnose problems, then make recommendations or give explanations to the human-in-the-loop. I will discuss these topics and describe how ontologies and digital twins are used as tools to increase trust in robots.
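The mapping from fault classes to failure modes and recommendations described above can be sketched as a small data model. This is a minimal illustration only: the fault classes follow the four categories named above, but every failure-mode name and recommendation below is a hypothetical example, not the actual ontology.

```python
from dataclasses import dataclass
from enum import Enum

class FaultClass(Enum):
    HARDWARE_FAULT = "hardware fault"
    SOFTWARE_ERROR = "software error"
    OPERATION_MISTAKE = "autonomy/operation mistake"
    CONTROL_INACCURACY = "inaccuracy in control"

@dataclass(frozen=True)
class FailureMode:
    name: str
    fault_class: FaultClass
    recommendation: str  # guidance shown to the human-in-the-loop

# Illustrative fragment: each failure mode is linked to the fault
# class it realises and to a recommended action.
FAILURE_MODES = [
    FailureMode("motor_stall", FaultClass.HARDWARE_FAULT,
                "Stop the mission and inspect the drive train."),
    FailureMode("planner_timeout", FaultClass.SOFTWARE_ERROR,
                "Replan with a relaxed goal or hand over to the operator."),
    FailureMode("trajectory_drift", FaultClass.CONTROL_INACCURACY,
                "Re-localise and re-tune the controller gains."),
]

def diagnose(fault_class: FaultClass) -> list:
    """Return the failure modes that a given fault class maps onto."""
    return [m for m in FAILURE_MODES if m.fault_class is fault_class]
```

In a full ontology this mapping would also carry semantics (class hierarchies, properties, constraints) that a plain lookup table cannot express; the sketch only shows the shape of the fault-to-failure-mode relation.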

Trust in the reliability and resilience of autonomous systems is paramount to their continued growth, as well as to their safe and effective utilisation. A recent global review of aviation regulation for BVLOS (Beyond Visual Line of Sight) operations with UAVs (Unmanned Aerial Vehicles) by the United States Congressional Research Service highlighted that run-time safety and reliability is a key obstacle to BVLOS missions in all twelve of the European Union countries reviewed. A more recent study, based on a survey of 1,500 commercial UAV operators, also highlighted that better solutions for reliability and certification remain a priority within unmanned aerial systems. Within the aviation and automotive markets there has been significant investment in diagnostics and prognostics for intelligent health management, supporting improvements in safety and enabling capabilities for autonomous functions, e.g. autopilots and engine health management.

The safety record in aviation has improved significantly over the last two decades thanks to advancements in the health management of these critical systems. In comparison, although the automotive sector has decades of data from the design, road testing, and commercial use of its products, it has still not addressed significant safety concerns, despite an investment of over $100 billion in autonomous vehicle research. Autonomous robots face similar, yet also distinct, challenges. For example, there is a significant market for deploying robots into harsh and dynamic environments, e.g. subsea, nuclear, and space, which present significant risks on top of the more typical commercial and operational constraints of cost, power, and communication. In comparison, traditional commercial electronic products in the EEA (European Economic Area) carry a CE (Conformité Européenne) marking, a certification mark that indicates conformity with health, safety, and environmental protection standards for products sold within the EEA. At present, there is no similar means of certification for autonomous systems.

Because of this need, standards are being created to support future requirements for the verification and validation of robotic systems. For example, the BSI standards committee on Robots and Robotic Devices and the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems (including the P7009 standard) are being developed to support safety and trust in robotic systems. However, autonomous systems require a new form of certification because they operate independently in dynamic environments. This is vital to ensure successful and safe interactions with people, infrastructure, and other systems. In a perfect world, industrial robots would be all-knowing: with sensors, communication systems, and computing power, a robot could predict every hazard and avoid all risks. Until a wholly omniscient autonomous platform is a reality, however, there will be one burning question for autonomous system developers, regulators, and the public: how safe is safe enough? Certification implies that a product or system complies with the relevant legal regulations, which differs in nature from technical or scientific testing. The former involves external review, typically carried out by regulators who provide guidance on proving compliance, while the latter usually refers to the reliability of the system. Certifying a system does not guarantee that it is safe; it only guarantees that, legally, it can be considered "safe enough" and that the risk is considered acceptable.

Many standards might be deemed relevant by regulators for robotic systems: from general safety standards, such as IEC 61508, through domain-specific standards, such as ISO 10218 (industrial robots), ISO/TS 15066 (collaborative robots), and RTCA DO-178B/C (aerospace), to ethical aspects (BS 8611). However, none of these standards addresses autonomy, particularly full autonomy, wherein systems take crucial, often safety-critical, decisions on their own. Therefore, based on the aforementioned challenges and state of the art, there is a clear need for advanced data analysis methods and a system-level approach that enables self-certification for semi- or fully autonomous systems, encompassing their advanced software and hardware components and their interactions with the surrounding environment. In the context of certification, there is a technical and regulatory need to verify the run-time safety and certification of autonomous systems. To achieve this in dynamic real-time operations, we propose an approach utilising a novel modelling paradigm to support run-time diagnosis and prognosis of autonomous systems, based on a powerful representational formalism that is extensible to include further semantics modelling different components, infrastructure, and environmental parameters.
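At its simplest, run-time diagnosis and prognosis can be illustrated by a monitor that watches one telemetry channel, diagnoses a limit violation when it occurs, and extrapolates the recent trend to predict one before it occurs. The sketch below is a toy example under that reading, not the modelling paradigm proposed here; the safety limit, window size, and linear extrapolation are all assumptions.

```python
from collections import deque

class RuntimeMonitor:
    """Toy run-time diagnosis/prognosis over one telemetry channel."""

    def __init__(self, limit: float, window: int = 5):
        self.limit = limit                     # hypothetical safety limit
        self.history = deque(maxlen=window)    # recent readings

    def update(self, value: float) -> str:
        self.history.append(value)
        # Diagnosis: the limit is already exceeded.
        if value > self.limit:
            return "fault"
        # Prognosis: project the trend over the window one window ahead;
        # if the projection crosses the limit, warn before the fault occurs.
        if len(self.history) >= 2:
            trend = self.history[-1] - self.history[0]
            if self.history[-1] + trend > self.limit:
                return "warning"
        return "ok"
```

For example, with a limit of 70, a rising sequence 50, 55, 60, 66 yields "ok" until the projected value crosses the limit, at which point the monitor returns "warning"; a reading of 72 then returns "fault". A real system would replace the single threshold with model-based reasoning over many interacting components.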

To evaluate the performance of this approach and the new modelling paradigm, we integrated our system with the Robot Operating System (ROS) running on Husky (a robot platform from Clearpath), together with other ROS components such as SLAM (Simultaneous Localization and Mapping) and ROSPlan with PDDL (Planning Domain Definition Language). The system was then demonstrated in an industry-informed confined-space mission for an offshore substation. In addition, a digital twin was used to communicate with the system and to analyse the system's outcomes.
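One simple way a digital twin can help analyse a system's outcomes is to compare live telemetry against the twin's prediction and flag where the two diverge. The sketch below assumes both are plain sequences of floats with a single tolerance; it is an illustration of the idea, not the actual ROS/digital-twin interface used in the demonstration.

```python
def twin_divergence(observed, predicted, tol=0.1):
    """Return the indices where live telemetry diverges from the
    digital twin's prediction by more than a tolerance.

    `observed` and `predicted` are equal-length sequences of floats;
    `tol` is an illustrative threshold, not a calibrated value.
    """
    return [i for i, (o, p) in enumerate(zip(observed, predicted))
            if abs(o - p) > tol]
```

Divergent indices would then feed back into the reliability ontology as symptoms to be diagnosed, closing the loop between the twin and the running system.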
