The emerging field of Explainable Artificial Intelligence (XAI) focuses on helping decision makers understand and trust the logic of underlying machine learning algorithms. Inherent in many AI algorithms is a tradeoff between explainability and predictive power: while less complex techniques such as Bayesian and decision networks may be explainable and transparent, more powerful and complex techniques, such as neural networks and deep learning, are less so. This paper explores explainability in artificial intelligence and machine learning, and connects these challenges to those the cost estimating community has encountered historically and will increasingly encounter as we also adopt AI techniques.
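As a minimal illustrative sketch of this tradeoff (not drawn from the paper), consider two scikit-learn models trained on the same data: a shallow decision tree whose prediction rules can be printed and read, and a small neural network whose learned weights offer no comparable account of any individual prediction. The dataset, model sizes, and settings below are assumptions chosen only for illustration.

```python
# Illustrative sketch of the explainability/power tradeoff (assumed example,
# not the paper's method), using scikit-learn and its bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_iris()
X, y = data.data, data.target

# A shallow decision tree: every prediction traces to human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# A small neural network: often more powerful on complex data, but its
# weights do not explain why any single prediction was made.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(X, y)
print("MLP training accuracy:", mlp.score(X, y))  # a score, not an explanation
```

The tree's printed rules are an explanation a decision maker can audit directly; the network's accuracy number, however good, is not.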

The future of XAI is relevant to this community in at least three profound ways: (1) defense systems that we support, and that are based on AI, will make life-or-death decisions; (2) our own analytical models will progressively incorporate more AI; and (3) just as with our current methods and techniques, decision makers will not trust our future analyses if we cannot adequately explain them. XAI must address questions such as: Why did the system arrive at a specific prediction or decision? When can we trust its answer? When will the system succeed or fail?
  • Jon Kilgore
    Mr. Jon Kilgore leads PRICE Systems' government solutions and has 20 years of experience in Cost Analysis, Project Management, and Program Planning and Control. Jon is responsible for all government customers in the United States and Canada, and during his career he has supported all the DoD Services, NASA, several intelligence agencies, DHS, and multiple civil agencies. His emphasis has been IT, software-intensive, and space programs. Jon earned an MS in Information and Telecommunication Systems from Johns Hopkins University and a BS in Commerce from the University of Virginia McIntire School of Commerce.