In the race to outpace adversaries and modernize platforms, the aerospace and defense sector is under immense pressure to ship software-defined systems faster than ever before. But when a single line of code can mean the difference between mission success and catastrophic failure, how do we balance velocity with absolute reliability?
Generative AI and large language models (LLMs) promise to revolutionize development cycles, generating code directly from requirements at unprecedented speeds. Yet for mission-critical and safety-critical systems, where DO-178C compliance and cybersecurity are non-negotiable, the risk of AI-generated defects cannot be ignored.
Blind trust in AI is not an option. However, outright rejection of these tools could mean losing the technological edge.
Join us for a critical discussion on how to bridge the gap between innovation and airworthiness. We’ll move beyond the hype to explore a deterministic safety net for AI-generated code.
In this session, we’ll cover:
• The trust gap: Analyzing the specific risks of using LLMs for safety-critical and security-critical applications in defense and aerospace.
• The compliance solution: How combining LLMs with deterministic tools, such as static analysis, can provide the rigor required to validate generated code against industry standards.
• The science: A review of the latest scientific research on LLM-based critical code generation and what it means for the future of defense software engineering.
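To make the "deterministic safety net" idea concrete, here is a minimal sketch of what gating LLM output with static analysis can look like. This is purely illustrative and not from the session: real DO-178C pipelines target languages such as C or Ada and use qualified analysis tools, whereas this toy example uses Python's standard `ast` module, and the `BANNED_CALLS` rule set is a hypothetical stand-in for a project coding standard.

```python
import ast

# Hypothetical policy for illustration: calls a project's coding standard
# (think MISRA-style rules) might forbid in generated code.
BANNED_CALLS = {"eval", "exec", "__import__"}

def deterministic_gate(source: str) -> list:
    """Statically inspect generated source and return a list of findings.

    An empty list means the code passed this (deliberately small) rule set.
    A production pipeline would chain qualified analyzers and
    requirements-based tests behind the same accept/reject decision.
    """
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        # Unparseable output is rejected outright.
        return ["syntax error at line {}: {}".format(err.lineno, err.msg)]

    findings = []
    for node in ast.walk(tree):
        # Rule 1: no dynamic-execution primitives.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            findings.append(
                "banned call '{}' at line {}".format(node.func.id, node.lineno))
        # Rule 2: no bare 'except:' clauses that silently swallow errors.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append("bare except at line {}".format(node.lineno))
    return findings

# Usage: gate LLM output before it reaches the integration branch.
generated = "result = eval(user_input)"
issues = deterministic_gate(generated)
if issues:
    print("REJECTED:", issues)  # fails Rule 1, so the code is discarded
```

The key property is determinism: the same generated code always produces the same verdict, which is what lets this style of check serve as evidence in a compliance argument, unlike a second LLM reviewing the first.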