S6E32: Resilient Cyber w/ Helen Oakley

Exploring the AI Supply Chain

In this episode of Resilient Cyber, host Chris Hughes sits down with Helen Oakley, Founding Partner of the AI Integrity & Safe Use Foundation (AISUF), to discuss the evolving landscape of AI supply chain security, AI Bills of Materials (AIBOMs), and the broader challenges of AI governance and transparency.

Key Highlights:

  • Helen's background: Helen serves as the Director of Software Supply Chain Security and Secure Development at SAP. She leads efforts in safeguarding CI/CD pipelines, implementing software transparency, and managing AI supply chain security. Helen is also the co-leader of the AIBOM Tiger Team under CISA and a Founding Partner of AISUF.

  • Understanding AI transparency: Helen discusses the need to start by identifying the type of AI models being adopted and the associated risks. Transparency, including the ethics and impact of models, is critical when implementing AI in organizations.

  • Differences between traditional software supply chain and AI supply chain: Helen explains how AI introduces new challenges compared to traditional software. Unlike traditional software, AI models can continue to evolve at runtime, so organizations need to monitor changes and assess their risks in real time.

  • Open-source vs. proprietary AI models: Helen highlights the complexities of using both open-source and commercial AI models. Public model hubs such as Hugging Face can expose organizations to risks like vulnerable dependencies and poisoned models or datasets, which must be assessed before adoption.
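One basic control behind the vetting Helen describes is verifying that a downloaded model artifact matches a hash pinned during security review, so a tampered or swapped file is caught before use. A minimal sketch in Python (the file name and pinned hash here are stand-ins, not anything from the episode):

```python
import hashlib
import os
import tempfile

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large model weights never load fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model_artifact(path: str, pinned_sha256: str) -> bool:
    """Compare a downloaded artifact against the hash recorded at review time."""
    return sha256_of_file(path) == pinned_sha256.lower()

# Demo with a stand-in "model weights" file (hypothetical artifact).
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
    f.write(b"fake model weights")
    model_path = f.name

pinned = sha256_of_file(model_path)  # in practice, pinned during security review
print(verify_model_artifact(model_path, pinned))
os.unlink(model_path)
```

Hash pinning catches silent file substitution but not flaws baked into the original weights; it complements, rather than replaces, scanning for malicious serialization payloads and dataset provenance checks.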

  • AI Bill of Materials (AIBOM): AIBOM is a subset of the Software Bill of Materials (SBOM), focusing on AI-specific elements such as datasets, model training, and even energy consumption. This transparency is essential for managing AI risks and regulatory compliance.
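To make the AIBOM idea concrete, here is an illustrative sketch of the kind of AI-specific metadata an AIBOM component can carry, loosely modeled on CycloneDX's machine-learning-model component and model card. The model name, dataset name, and energy field are hypothetical, and the structure is a simplified approximation rather than the official schema:

```python
import json

# Hypothetical AIBOM component: captures the dataset, training, and
# energy-consumption details the episode mentions. Field names are
# illustrative assumptions, not the normative CycloneDX schema.
aibom_component = {
    "type": "machine-learning-model",
    "name": "sentiment-classifier",  # hypothetical model
    "version": "2.1.0",
    "modelCard": {
        "modelParameters": {
            "task": "text-classification",
            "datasets": [
                {"name": "reviews-corpus-v3", "classification": "public"}  # hypothetical dataset
            ],
        },
        "considerations": {
            "environmental": {"trainingEnergyKWh": 1200},  # assumed field
            "ethical": ["evaluated for demographic bias"],
        },
    },
}

print(json.dumps(aibom_component, indent=2))
```

Because the component rides alongside ordinary SBOM entries, existing SBOM tooling can ingest it while auditors and regulators get the AI-specific transparency the bullet above describes.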

  • AISUF’s work: The AI Integrity and Safe Use Foundation aims to develop frameworks that help organizations identify risks, implement safety standards, and mature their AI practices. The framework includes grades and standards for general-purpose AI and critical infrastructure AI.

  • Regulatory landscape: Helen shares insights into evolving regulations such as the EU AI Act and SB 1047 in California. These regulations are creating stricter requirements around AI accountability, safety, and supply chain risks, prompting organizations to implement transparency and governance practices.