- Dates: June 2022 to June 2026
- Sponsor: Tuition funded by the Royal Navy
Full autonomy may be achieved by integrating Artificial Intelligence, Machine Learning, and Data Science. With the rapid development of autonomous systems, there is a growing need for assurance and certification processes that ensure safe deployment.
Currently, the UK MoD ensures safety through civil health and safety regulations, government legislation, and a unique duty holder system established following the 2009 Haddon-Cave report. This research evaluates whether the duty holder construct is suited to AI-based technology. Combining a literature review with interviews of stakeholders from the MoD, industry, and academia, it compares safety assurance methods across various domains to develop a new concept for military weapon systems containing AI: "safe to operate itself safely".
Progress update
Initiated in 2022, this doctoral project began with a comprehensive literature analysis across the agricultural, automotive, industrial, military, engineering, and space sectors to evaluate assurance techniques and the varied uses of AI. A knowledge gap emerged where human performance assessment and machine certification overlap once control is transferred to AI; no policy exists in this space. This led to the development of the "Safe to Operate Itself Safely" framework. The framework was refined through insights gained from 20 high-level interviews and has been presented to wide audiences at a number of conferences, sparking fierce debate. It is currently undergoing testing in workshops before final refinement and publication.
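Purely as an illustration of the gap described above (the class, field, and method names below are hypothetical and not part of the research itself), the two existing assurance regimes and the uncovered AI case can be sketched in code:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Controller(Enum):
    HUMAN = auto()    # operator competence is assessed ("safe to operate")
    AI = auto()       # control transferred to AI: machine assumes the human role

@dataclass
class AssuranceCase:
    equipment_certified: bool   # machine certification (e.g. airworthiness)
    operator_competent: bool    # human performance assessment (training, competence)
    controller: Controller

    def covered_by_existing_policy(self) -> bool:
        """Existing policy assures certified equipment operated by a competent
        human. When control transfers to AI, the system must in effect be
        assured as 'safe to operate itself safely' -- a case no current
        policy addresses."""
        if self.controller is Controller.AI:
            return False  # the knowledge gap the research identifies
        return self.equipment_certified and self.operator_competent

# Even a fully certified system with trained operators falls outside
# both regimes once the AI is in control.
case = AssuranceCase(equipment_certified=True, operator_competent=True,
                     controller=Controller.AI)
print(case.covered_by_existing_policy())  # False
```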
Further information
Early iterations of this research contributed to a House of Lords Select Committee inquiry, have been peer reviewed in published journals, and won the Patron’s Award when peer reviewed, presented, and defended at conference.