AI Security Summit — October 15, 2026

The flagship. One day. Two tracks. No filler.

Leadership Track

This track is about the hard calls: setting AI risk budgets, choosing models and vendors, and deciding what your org will defend vs ignore.

Practitioner Track

This isn’t high-level fluff. It’s about building, breaking, and defending real AI systems.

Built. Broken. Fixed.

Live model exploitation demos, architecture teardowns from teams shipping AI at scale, and the workshops where you actually break things (then fix them). LLM security, AI red teaming, and secure-by-default development — from the people doing it daily.

Event Details

Date

October 15, 2026

Venue

The Westin St. Francis San Francisco on Union Square

Two Dedicated Tracks

Leadership Track

For Strategic Decision-Makers

An intentionally small room. CISOs, VPs of Security, and Heads of AI from the companies actually deploying these systems, working through the strategic problems together: org design for AI security, board-level risk communication, and how to staff teams for threats that didn't exist two years ago. The size is the point — this isn't a keynote audience, it's a working session.

Who Should Attend:

  • Leaders driving AI strategy and risk decisions
  • C-suite execs (CISOs, CTOs, CIOs, etc.) with AI responsibility
  • VPs/Heads of Engineering, AI, Security, Platform, or Infrastructure

Topics we’ll cover:

  • What AI risk actually looks like in board meetings — metrics, frameworks, and real questions directors ask
  • Practical governance patterns — model approval gates, ownership maps, policy checklists
  • Concrete supply chain risk: what to verify in third-party models and where past breaches started
  • Where security review slows teams down versus where it prevents costly outages and damaging data leaks
  • Case reviews of real AI security incidents — attack vectors, fixes applied, and measurable outcomes

This is for people who must justify decisions with evidence — not slogans — and run AI programs at enterprise scale.

Practitioner Track

For Hands-On Experts

No slide-deck theory here: live exploitation of real models, architecture teardowns from teams shipping AI at scale, and workshops where you break systems and then harden them. LLM security, AI red teaming, and secure-by-default development, taught by the people doing it daily.

Who Should Attend:

  • Engineers building AI features and infrastructure
  • AppSec teams testing those systems
  • AI/ML engineers and researchers tuning and deploying models
  • Platform/architecture engineers owning pipelines and endpoints
  • AI security practitioners responsible for detection and response

Topics we’ll cover:

  • Threat modeling for AI/ML with concrete attack patterns and failure cases
  • Securing models, training data, pipelines, and inference endpoints — with config examples and code references
  • Live red team demos — prompt injection exploits, misuse chains, and model abuse patterns
  • LLM security hardening: prompt safety, hallucination controls, and abuse mitigation code snippets
  • Detection and monitoring playbooks — what logs matter, thresholds to watch, and incident response steps that have worked in real environments

Expect actual examples, configs, test cases, and clear steps you can use this week — not marketing language.

Agenda

Coming soon!

The full agenda hasn't been finalized yet. Check back for updates, or register interest below to be notified when it's published.

Some of our 2025 speakers

People who've done the work, sharing what they’ve learned.

We don't book speakers for name recognition. We book them because they've published the paper, built the tool, run the red team, or shipped the fix. Our CFP is open, and our bar is high — if your talk doesn't include original work, it probably isn't a fit.

Shawn 'Swyx'

Curator & Author

AI.Engineer

Sarah Guo

Founder

Conviction

Manoj Nair

Chief Innovation Officer

Snyk

Rama Akkiraju

Vice President of AI for IT

NVIDIA

Jared Hanson

Co-Founder

Keycard

Matthew Creager

Co-Founder

Keycard

Anu Bharadwaj

President

Atlassian

Chenxi Wang

Founder and General Partner

Rain Capital

Matan Grinberg

Founder & CEO

Factory

Aron Eidelman

Security Advocate

Google

Denise Kwan

Developer Advocate

Google

Vandana Verma

Security Advocacy Leader

Snyk

Registration & Early Interest

This is a paid event with limited capacity.

Registration opens soon! Register interest now to:

  • Save your seat for a discounted rate
  • Be the first to hear when tickets go live
  • Receive early updates on speakers and agenda

© 2026 AI Security Summit. All rights reserved.