AI Security

The Leadership Guide to Securing AI

Artificial intelligence (AI) is critical to the future success and health of companies across industries.

To empower emerging and current AI security leaders, Global Resilience Federation (GRF) convened an experienced working group and asked KPMG to facilitate in-depth meetings and interviews with AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.

We believe the output, The Leadership Guide to Securing AI, will be of great value to the GRF network and to other organizations seeking to adopt and secure this groundbreaking technology.

Practitioners’ Guide to Managing AI Security

The race to integrate AI into internal operations and bring AI-based products and services to market is moving faster than almost anyone could have imagined. Some security leaders have expressed concern that, in the excitement over AI's potential, critical security and assurance considerations are being overlooked.

Recognizing the disconnect between AI innovation and AI security, Global Resilience Federation convened a working group and asked KPMG to facilitate in-depth discussions among AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.

The output of this working group is the Practitioners’ Guide to Managing AI Security. The guide provides insights and considerations to strengthen collaboration between data scientists and AI security teams across five tactical areas identified by the working group: Securing AI, Risk & Compliance, Policy & Governance, AI Bill of Materials, and Trust & Ethics.