Dependable Artificial Intelligence in Defence

Defence AI – Strategic Imperative

Artificial Intelligence (AI) is at the forefront of technological transformation across industries, with its potential to revolutionise operations, improve efficiency, and deliver unprecedented capabilities. Nowhere is this potential more critical than in the realm of national defence. Recognising the profound implications of AI, the Ministry of Defence (MOD) has introduced Joint Service Publication (JSP) 936, a directive designed to ensure that AI technologies are adopted and integrated in a way that is ethical, safe, and effective. This publication sets out a structured approach to using AI, balancing its ambitious deployment with the need for robust governance, ethical assurance, and operational readiness.

Objectives and Scope of JSP 936

The JSP 936 directive underlines the MOD’s commitment to adopting AI technologies that align with the UK’s democratic values while maintaining operational effectiveness. Its principal goal is to provide MOD teams with a clear framework for developing and deploying AI-enabled systems, ensuring compliance with ethical, legal, and safety standards. Central to this framework are the MOD’s AI Ethical Principles, which serve as the foundation for safe and responsible AI adoption. These principles are human-centricity, responsibility, understanding, bias and harm mitigation, and reliability.

The directive encompasses a wide range of applications, from robotic and autonomous systems (RAS) to digital tools supporting logistics and decision-making. However, it also sets boundaries to maintain focus on critical and ethically sensitive areas. For instance, everyday commercial tools such as predictive text on messaging platforms are excluded from its scope. This clarity ensures resources are directed towards systems that require careful oversight.

Embedding Ethical Principles

The integration of AI in defence systems raises complex ethical questions. To address these, JSP 936 embeds ethical considerations at every stage of AI development and deployment. The directive emphasises the importance of maintaining meaningful human control over AI-enabled systems, ensuring accountability for their outcomes. This commitment is reflected in the introduction of the Responsible AI Senior Officer (RAISO), a role dedicated to overseeing the ethical governance of AI within MOD organisations.

The MOD’s ethical principles provide a robust framework for this governance:

Human-centricity

AI systems must prioritise human welfare, considering both the positive and negative impacts on all stakeholders, including operators, civilians, and adversaries.

Responsibility

Clear accountability mechanisms must ensure that human operators retain ultimate control over AI outcomes, supported by transparent governance structures.

Understanding

AI systems must be explainable and transparent, enabling stakeholders to make informed decisions based on their outputs.

Bias and Harm Mitigation

Developers must proactively identify and address biases to prevent unintended harm or discrimination, ensuring fairness and inclusivity.

Reliability

AI systems must demonstrate consistent performance, robustness, and security under defined conditions.

These principles ensure that AI technologies are not only technically sound but also socially and ethically aligned, fostering trust among users and stakeholders.

Operational Applications and Challenges

AI’s transformative potential in defence is vast. It can be applied across a spectrum of functions, from enhancing reconnaissance and intelligence gathering to streamlining logistics and decision-making. The directive highlights several key applications, such as reinforcement learning algorithms for command and control operations, large language models for administrative support, and object detection systems for intelligence and surveillance.
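
As a minimal illustration of how an object detection system might keep a human in the loop, the sketch below applies a confidence threshold to hypothetical detection outputs and routes anything below it to an analyst for review. The data structure, threshold value, and function names are assumptions made for this example rather than anything prescribed by JSP 936.

```python
from dataclasses import dataclass

# Illustrative sketch only: the Detection structure, threshold and routing
# logic are assumptions for this example, not prescribed by JSP 936.

@dataclass
class Detection:
    label: str          # e.g. "vehicle", "vessel"
    confidence: float   # model score in [0.0, 1.0]
    bbox: tuple         # (x_min, y_min, x_max, y_max) in image pixels

REVIEW_THRESHOLD = 0.80  # below this, a human analyst confirms the detection

def triage(detections: list[Detection]) -> tuple[list[Detection], list[Detection]]:
    """Split detections into those accepted automatically and those
    flagged for human review, preserving meaningful human control."""
    accepted = [d for d in detections if d.confidence >= REVIEW_THRESHOLD]
    for_review = [d for d in detections if d.confidence < REVIEW_THRESHOLD]
    return accepted, for_review

if __name__ == "__main__":
    sample = [
        Detection("vehicle", 0.93, (120, 40, 210, 110)),
        Detection("vessel", 0.61, (300, 220, 420, 330)),
    ]
    auto, review = triage(sample)
    print(f"{len(auto)} accepted automatically, {len(review)} sent for analyst review")
```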

However, the deployment of AI also presents unique challenges. The unpredictability and opacity of AI systems can complicate their integration into defence operations. For example, robotic and autonomous systems require rigorous testing and validation to ensure they operate reliably in diverse and complex environments. Similarly, digital systems must be transparent in their outputs to prevent errors or biases from influencing critical decisions.

JSP 936 acknowledges these challenges, calling for a balanced approach that prioritises innovation while managing risks. This includes maintaining clear operational boundaries for AI systems, ensuring that they function predictably within their intended contexts.
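
One way to make such operational boundaries concrete in software is an envelope check that only permits autonomous operation inside the conditions a system was validated for, falling back to human control otherwise. The sketch below is illustrative only; the envelope parameters, sensor fields, and mode names are assumptions, not requirements drawn from JSP 936.

```python
from dataclasses import dataclass

# Illustrative sketch: envelope limits and sensor fields are assumed values
# chosen for this example, not figures taken from JSP 936.

@dataclass
class OperatingEnvelope:
    max_wind_speed_ms: float = 15.0    # validated wind conditions
    min_visibility_m: float = 500.0    # validated visibility
    min_gnss_satellites: int = 6       # validated navigation quality

@dataclass
class SensorState:
    wind_speed_ms: float
    visibility_m: float
    gnss_satellites: int

def within_envelope(state: SensorState, env: OperatingEnvelope) -> bool:
    """Return True only if current conditions fall inside the envelope
    the system was tested and approved for."""
    return (
        state.wind_speed_ms <= env.max_wind_speed_ms
        and state.visibility_m >= env.min_visibility_m
        and state.gnss_satellites >= env.min_gnss_satellites
    )

def select_mode(state: SensorState, env: OperatingEnvelope) -> str:
    # Outside the validated envelope, hand control back to the operator.
    return "autonomous" if within_envelope(state, env) else "human_control"

state = SensorState(wind_speed_ms=18.0, visibility_m=800.0, gnss_satellites=7)
print(select_mode(state, OperatingEnvelope()))  # "human_control": wind exceeds the envelope
```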

AI Lifecycles and Risk Management

One of the central tenets of JSP 936 is the adoption of a lifecycle approach to AI development and deployment. This approach aligns with established software development practices such as DevOps and MLOps, which emphasise continuous integration, validation, and improvement. By adopting these methodologies, MOD teams can ensure that AI systems remain effective and reliable throughout their operational lifespans.
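
In practice, that lifecycle discipline is often enforced through automated release gates in an MLOps pipeline. The sketch below shows the general shape of a pre-deployment check that blocks a candidate model if it misses absolute thresholds or regresses against the fielded baseline; the metric names, thresholds, and tolerance are assumptions for illustration.

```python
# Illustrative MLOps-style release gate. The metric names, thresholds and
# tolerance are assumptions for this example, not values from JSP 936.

BASELINE_METRICS = {"accuracy": 0.91, "false_positive_rate": 0.04}
CANDIDATE_METRICS = {"accuracy": 0.93, "false_positive_rate": 0.05}

REQUIREMENTS = {
    "accuracy": ("min", 0.90),            # must meet an absolute floor
    "false_positive_rate": ("max", 0.06), # must stay under a ceiling
}
REGRESSION_TOLERANCE = 0.02  # allowed slack relative to the baseline

def gate(candidate: dict, baseline: dict) -> list[str]:
    """Return a list of reasons to block deployment; empty means pass."""
    failures = []
    for metric, (kind, bound) in REQUIREMENTS.items():
        value = candidate[metric]
        if kind == "min" and value < bound:
            failures.append(f"{metric}={value} below required {bound}")
        if kind == "max" and value > bound:
            failures.append(f"{metric}={value} above allowed {bound}")
    # Also flag regressions against the currently fielded baseline.
    if candidate["accuracy"] < baseline["accuracy"] - REGRESSION_TOLERANCE:
        failures.append("accuracy regressed beyond tolerance")
    return failures

if __name__ == "__main__":
    problems = gate(CANDIDATE_METRICS, BASELINE_METRICS)
    print("Deploy" if not problems else f"Blocked: {problems}")
```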

Risk management is a cornerstone of this lifecycle approach. All AI projects must undergo rigorous ethical risk assessments, with risk levels determining the necessary oversight and approval processes. High-risk applications, such as those involving kinetic effects, require consultation with the Defence AI and Autonomy Unit (DAU) to ensure accountability at the highest levels. This layered approach ensures that risks are identified, managed, and mitigated effectively.
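
To illustrate how risk levels might drive oversight within project tooling, a simple triage function could map coarse project attributes to the approvals required, escalating kinetic applications to the DAU. Apart from the RAISO and DAU roles named in the directive, the criteria and approval steps here are assumptions made for this sketch; JSP 936 and MOD governance define the authoritative process.

```python
# Illustrative only: the criteria and approver names (other than RAISO and DAU)
# are assumptions made for this sketch, not the process defined in JSP 936.

def required_approvals(involves_kinetic_effects: bool,
                       affects_individuals: bool,
                       fully_automated_decision: bool) -> list[str]:
    """Map coarse project attributes to an assumed approval chain."""
    approvals = ["Project ethics risk assessment", "RAISO review"]
    if affects_individuals or fully_automated_decision:
        approvals.append("Senior ethics panel review")
    if involves_kinetic_effects:
        # High-risk applications escalate to the Defence AI and Autonomy Unit.
        approvals.append("DAU consultation")
    return approvals

print(required_approvals(involves_kinetic_effects=True,
                         affects_individuals=False,
                         fully_automated_decision=True))
```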

Moreover, JSP 936 highlights the importance of adaptability in risk management. As AI technologies evolve, so too do the risks associated with their use. The directive calls for continuous monitoring and reassessment to address emerging risks and ensure that AI systems remain safe and effective.
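
Continuous monitoring of this kind is frequently implemented as a statistical drift check that compares live inputs with the data a model was validated on. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy as one possible drift signal; the synthetic data, significance threshold, and response are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

# Illustrative drift monitor: the synthetic data, significance threshold and
# the response (flag for reassessment) are assumptions for this sketch.

rng = np.random.default_rng(0)
reference_inputs = rng.normal(loc=0.0, scale=1.0, size=5_000)  # validation-time data
live_inputs = rng.normal(loc=0.4, scale=1.0, size=5_000)       # recent operational data

def drift_detected(reference: np.ndarray, live: np.ndarray,
                   p_threshold: float = 0.01) -> bool:
    """Flag drift when the live input distribution differs significantly
    from the reference distribution used during validation."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < p_threshold

if drift_detected(reference_inputs, live_inputs):
    print("Input drift detected: trigger risk reassessment and revalidation")
else:
    print("No significant drift detected")
```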

Fostering International Collaboration

In an increasingly interconnected world, collaboration with international partners is essential for leveraging AI’s potential in defence. JSP 936 emphasises the importance of fostering trust and interoperability among allies, particularly within NATO. The directive aligns with NATO’s Principles of Responsible Use for AI in Defence, promoting shared ethical standards and technical interoperability.

This commitment to international collaboration enhances collective security and ensures that UK-developed AI technologies adhere to global norms. The MOD’s active participation in initiatives such as NATO’s Data and Artificial Intelligence Review Board further demonstrates its dedication to promoting responsible AI use on the global stage.

Training and Cultural Adaptation

To realise the full potential of AI, MOD personnel must be equipped with the skills and knowledge necessary to develop, deploy, and operate these systems effectively. JSP 936 underscores the importance of training programmes tailored to the unique challenges of AI technologies. These programmes must address not only technical proficiency but also an understanding of AI systems’ behaviour, limitations, and ethical implications.

Collaborative training initiatives, where human operators and AI systems learn from one another, are particularly emphasised. Such initiatives optimise human-machine teaming, ensuring that people and machines can work together seamlessly. This approach is critical for maintaining trust in AI systems and ensuring their effective integration into defence operations.

In addition to technical training, the directive calls for a cultural shift within the MOD. This includes fostering an environment where ethical concerns can be openly discussed and addressed, encouraging transparency and inclusivity. By embedding ethical awareness into organisational culture, JSP 936 ensures that AI adoption is guided by shared values and principles.

Governance and Accountability

Strong governance is essential for ensuring the safe and responsible use of AI in defence. JSP 936 establishes clear governance structures, defining roles and responsibilities at every level of AI development and deployment. The RAISO plays a central role in this framework, overseeing the ethical assurance of AI projects and ensuring compliance with MOD policies.

The directive also highlights the importance of transparency in governance. This includes clear communication of the ethical considerations underlying AI systems, as well as documentation of risk management decisions. By maintaining transparency, JSP 936 builds trust among stakeholders and reinforces the MOD’s commitment to ethical AI use.
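
A lightweight way to support that transparency is to capture each risk decision in a structured, auditable record. The schema below is an assumed minimal example for illustration; JSP 936 does not prescribe this format.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# Assumed minimal schema for an auditable risk decision record;
# illustrative only, not a format prescribed by JSP 936.

@dataclass
class RiskDecisionRecord:
    system_name: str
    decision: str                 # e.g. "approved with conditions"
    risk_level: str               # e.g. "medium"
    ethical_considerations: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    decided_by: str = "RAISO"
    decided_on: str = date.today().isoformat()

record = RiskDecisionRecord(
    system_name="Logistics demand-forecasting tool",
    decision="approved with conditions",
    risk_level="medium",
    ethical_considerations=["potential bias in historical demand data"],
    mitigations=["quarterly bias audit", "human sign-off on reorder decisions"],
)

# Serialise for an audit trail that can be shared with stakeholders.
print(json.dumps(asdict(record), indent=2))
```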

The Future of AI in Defence

JSP 936 represents a significant milestone in the MOD’s approach to AI integration. By embedding ethical principles, promoting rigorous governance, and fostering international collaboration, the directive provides a comprehensive framework for deploying AI responsibly. However, its success will depend on the MOD’s ability to adapt to the evolving landscape of AI technologies.

As AI continues to advance, new opportunities and challenges will emerge. The MOD must remain vigilant, ensuring that its policies and practices keep pace with technological developments. This includes ongoing investment in research and innovation, as well as continuous engagement with stakeholders to address emerging ethical and operational considerations.

Conclusion

The introduction of JSP 936 marks a proactive step towards harnessing the potential of AI in defence while safeguarding against its risks. By prioritising ethical considerations, fostering international collaboration, and investing in training and governance, the MOD is laying the foundation for a future where AI enhances national security and operational effectiveness. As the defence sector embraces this transformative technology, the principles and practices outlined in JSP 936 will serve as a vital guide for navigating the complexities of AI adoption.
