AI in Military and Defense: It's a Matter of Trust

AI tools are necessary to maintain advantage and keep the nation safe, but they must live up to our ethical principles

04-03-2023 | SAIC | DEFENSE

Key Takeaways:

  • The Department of Defense's vision of trustworthy artificial intelligence rests on an approach that is responsible, equitable, traceable, reliable and governable.
  • Responsible AI in the military is achievable when industry partners, including SAIC, follow transparent, explainable and auditable development processes.
  • The DOD and industry partners have the obligation to carry out ethical implementation and operationalization of responsible AI solutions to prevent unintended consequences.

The meteoric rise of OpenAI’s ChatGPT and rival artificial intelligence chatbots like Google’s Bard underscores how AI is increasingly permeating our everyday lives. But, as recent headlines have shown, these new AI tools have brought with them a slew of ethical questions about their use, including whether they can even be trusted. One thing is clear: AI is here to stay, and everything from search engines to the tech we depend on every day to the systems and weapons for defending our nation will run on algorithmic software.

AI investment by both private industry and governments is growing exponentially. Advanced nations and adversaries are developing next-generation autonomous defense systems and weapons. For SAIC’s customers in the Department of Defense, AI is a necessity for maintaining military advantage, and questions around ethics and trust are crystallizing quickly. In developing and using AI solutions, defense customers must adhere to the department’s five ethical principles, set as policy: that AI be responsible, equitable, traceable, reliable and governable.

In 2021, Secretary of Defense Lloyd Austin said, “We have a principled approach to AI that anchors everything this department does; we call this responsible AI,” adding, “that is the only kind of AI we do.” According to the department, responsible AI, or RAI, is a journey to trust: the disciplined approach by which the armed services and DOD agencies must conduct AI design, development, deployment and use in order to prevent unintended consequences.

While the DOD, SAIC and our customers are addressing the importance of AI, and more specifically RAI, and how to collaboratively solve ethical issues at home, this challenge is a global concern. In mid-February, we attended REAIM 2023 in the Netherlands, the first global summit on RAI in the military domain. Held at the World Forum in The Hague, it provided a venue for foreign ministers, government delegates, military officials, industry members, think tanks and others from the U.S. and allied nations to tackle the current state of AI technologies and their opportunities, challenges and risks in an open, dialogue-driven manner.

Defining responsible AI

Since this was the inaugural conference, discussions naturally landed all over the map. Most speakers and panelists, however, raised essential and thought-provoking questions, such as where the ethical threshold for AI technologies lies and whether those that exceed it must be outlawed. As we navigate the expanding paradigm of unmanned tools and weapons, which have the potential to operate with minimal human intervention, the conference represented the first concerted effort on the global stage to consider the consequential aspects of the AI solutions being developed.

There is global consensus and urgency around establishing a fundamental understanding of, and commitment to, ethical AI. SAIC and Koverse will not only align with the DOD's principled approach but also provide leadership for it, in order to effectively serve our customers and allies. Even as we support national defense missions with AI solutions that meet the speed of need, we must provide tools that enable accountability and transparency to help our customers achieve their outcomes in a lawful and ethical manner.

How can industry partners be leaders in military ethics and AI safety and earn the trust of our warfighters, civilian personnel and citizens? We must concentrate not only on the performance of our AI solutions but also on our development activities, marrying them with the DOD’s ethical principles and strategic frameworks for implementation. We first have to earn the trust of the DOD itself.

Focusing on transparent processes

As defense organizations build out their RAI governance structures and processes for oversight and accountability, industry leaders can build trust with them by making design, development and testing activities available for auditing and provenance checking. With activity-monitoring and documentation tools that explain the reasoning and behavior of AI solutions at every step of development, and by preventing AI from making potentially harmful decisions and teaching it to avoid them, risks can be better managed.
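
To make this concrete, here is a minimal sketch of what an append-only audit trail for development activities could look like. It is illustrative only: the `AuditRecord` fields, the JSONL log format and all names are assumptions, not a description of any SAIC, Koverse or DOD tool.

```python
# Minimal sketch of an append-only audit trail for AI development activity.
# All field names and the JSONL format are illustrative assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
from pathlib import Path


@dataclass
class AuditRecord:
    activity: str        # e.g. "design", "data-gathering", "training", "testing"
    actor: str           # person or system that performed the activity
    dataset_sha256: str  # fingerprint of the exact data that was used
    model_version: str   # model produced or evaluated by this activity
    notes: str = ""
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def fingerprint(dataset: Path) -> str:
    """Hash a dataset file so auditors can later verify exactly what was used."""
    return hashlib.sha256(dataset.read_bytes()).hexdigest()


def append_record(log: Path, record: AuditRecord) -> None:
    """Append one record as a JSON line; existing entries are never rewritten."""
    with log.open("a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
```

Because the log is append-only and each entry fingerprints its inputs, an auditor can check provenance against the recorded evidence rather than relying on a development team’s recollection.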

No AI solution will be operationalized by the DOD without explainability, which means allowing stakeholders to audit the chain of events from the gathering of data, to how the AI was trained and tested, to the final generated outputs. Trust is about allowing customers to drill down into any part of an AI solution when they wish to know more about what they’re working with.
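
Continuing the illustrative sketch above, drilling down from a generated output to its origins could be as simple as replaying the log: filter every recorded activity for the model version that produced the output, oldest first. The linkage by `model_version` is, again, an assumption made for illustration.

```python
# Sketch: reconstruct the chain of events behind a model's outputs by
# replaying the audit log from data gathering through training and testing.
import json
from pathlib import Path


def chain_of_events(log: Path, model_version: str) -> list[dict]:
    """Return every logged activity tied to one model version, oldest first."""
    records = [
        json.loads(line)
        for line in log.read_text(encoding="utf-8").splitlines()
        if line.strip()
    ]
    linked = [r for r in records if r.get("model_version") == model_version]
    return sorted(linked, key=lambda r: r["timestamp"])


# A stakeholder auditing a result might then inspect, for example, which
# dataset fingerprint fed training (file and version names are hypothetical):
#   for step in chain_of_events(Path("audit.jsonl"), "demo-model-1.2"):
#       print(step["timestamp"], step["activity"], step["dataset_sha256"][:12])
```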

This ultimately means that warfighters must be able to provide user feedback during testing and verification of solutions and, critically, during operations in the field, in order to build confidence in AI use. We must keep adapting and modifying solutions based on that evidence and on quantifiable scoring, knowing that no AI solution is static and final.
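
As a rough sketch of what quantifiable scoring of user feedback could look like, the snippet below normalizes ratings gathered during testing and field use and gates a model update on measurable improvement. The 1-to-5 rating scale and the improvement margin are illustrative assumptions, not a DOD requirement.

```python
# Sketch: gate a model update on quantifiable user-feedback scores.
# The 1-5 rating scale and the 0.05 margin are illustrative assumptions.
from statistics import mean


def feedback_score(ratings: list[int]) -> float:
    """Normalize 1-5 ratings from testing or field use to a 0-1 score."""
    return (mean(ratings) - 1) / 4


def approve_update(current: list[int], candidate: list[int],
                   margin: float = 0.05) -> bool:
    """Accept a candidate model only on evidence of measurable improvement."""
    return feedback_score(candidate) >= feedback_score(current) + margin


# Example: field ratings favor the candidate, so the update is approved.
print(approve_update(current=[3, 4, 3, 3], candidate=[4, 4, 5, 4]))  # True
```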

Industry has an obligation to carry out the ethical implementation and operationalization of responsible AI in a way that is core to the values of our customers, the DOD and the nation. And we must act as an antibody, counteracting irresponsible and unethical AI work in order to drive scalable and accepted success. It behooves us to have responsible technologies at the forefront of adoption.

Developing a steady dialogue

It is imperative to continue conversations and formal discussions around RAI like those at REAIM. While we enjoyed all the talks and found value in each one, future conferences that lean toward military personnel with hands-on AI experience, and that focus on those working in operational theaters, would help industry direct its efforts and investments. The use of AI and its implications have transcended theoretical considerations, and ethical responsibility needs to be firmly rooted in the realities that warfighters and defense leaders face every day.

Attending REAIM reinforced our understanding that the government and our industry are at the forefront of AI efforts. It is clear that all of us share the weight of examining our collective ethics posture when developing the tools that our military and allies will depend on for future success. With that in mind, SAIC and Koverse are positioned to do our part in ensuring that the ethical considerations of our military and allies are embedded in AI development, and REAIM was a great opportunity for government, industry and academia to work together and learn from each other in shaping this future.
