SATURN 2020 has ended
Tuesday, May 12 • 10:30am - 11:15am
A Risk-Based Approach for Assuring the Trustworthiness of AI Embedded in Defense Systems

Prompted by the successful application of Artificial Intelligence (AI), and specifically Machine Learning (ML), in the commercial sector, the Department of Defense (DoD) seeks to leverage these technologies in military systems. AI offers the potential to solve a wide variety of DoD problems, increasing levels of autonomy and improving human-machine collaboration for our warfighters. However, the trustworthiness we place in such systems to function safely and ethically is a critical concern. The “black box” nature of ML, combined with well-known sensitivities to the data sets used to develop (train) ML models, raises many legitimate questions about that trustworthiness. The 2019 update to the National AI Research and Development Strategic Plan continues to acknowledge this gap.

We present a risk-based approach that considers the full system context and the full system life cycle, from initial concept through sustainment. While investment in research programs such as DARPA’s Explainable AI (XAI) and Guaranteeing AI Robustness against Deception (GARD) is critical to achieving long-term AI trustworthiness goals, we believe it is also worth looking at the problem from a broader perspective. Our approach acknowledges (and leverages) the fact that AI algorithms are but one part of a complete system, and that decisions made as early as CONOPS and architecture development affect system-level trustworthiness. It focuses attention on the risks of AI algorithms embedded in a system producing incorrect results (e.g., classifications, decisions). To mitigate those risks, we apply a collection of techniques to the system-level CONOPS and architecture, validate and verify the AI algorithms and models embedded within the system, and support a comprehensive test hierarchy with an automated DevOps pipeline. A further set of advanced techniques addresses the most significant AI trustworthiness risks.
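The abstract does not specify how risks are scored or prioritized. As a purely illustrative sketch (not the speakers' method), a conventional likelihood × impact risk matrix over embedded AI components could be expressed as follows; all component names, failure modes, and ratings are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIComponentRisk:
    """One embedded AI/ML component's trustworthiness risk entry (illustrative)."""
    component: str     # hypothetical embedded AI component
    failure_mode: str  # what an incorrect result looks like at the system level
    likelihood: int    # 1 (rare) .. 5 (frequent)
    impact: int        # 1 (negligible) .. 5 (catastrophic)

    @property
    def score(self) -> int:
        # Classic risk-matrix scoring: likelihood x impact
        return self.likelihood * self.impact

def prioritize(risks: list[AIComponentRisk]) -> list[AIComponentRisk]:
    """Order risks from highest to lowest score, so mitigation effort
    (V&V, testing in the DevOps pipeline) targets the worst risks first."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

risks = [
    AIComponentRisk("image-classifier", "misclassification under sensor noise", 4, 5),
    AIComponentRisk("route-planner", "suboptimal route in dense traffic", 3, 2),
]

for r in prioritize(risks):
    print(f"{r.score:2d}  {r.component}: {r.failure_mode}")
```

The ordered list would then drive where system-level mitigations (architectural safeguards, targeted V&V, dedicated test cases) are applied.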

Rick LaRowe

Raytheon Integrated Defense Systems
Dr. Richard LaRowe is a Principal Engineering Fellow and has been with Raytheon Integrated Defense Systems (IDS) for 17 years. He is currently the IDS Software Engineering Technical Director and leads the IDS strategy and roadmaps for applying Artificial Intelligence and Machine Learning...

Tuesday May 12, 2020 10:30am - 11:15am EDT
Salon 11/12 Rosen Plaza Hotel