Breaking GenAI: Offensive Security for Modern AI System

Training Overview

Breaking GenAI: Offensive Security for Modern AI Systems is an advanced, hands-on training at CSA XCON 2026, Dehradun, focused on red teaming modern AI and GenAI systems used in real-world, high-risk environments.

As AI systems rapidly move from experimentation to production, red teaming AI is no longer optional. Global regulations, enterprise governance requirements, and real-world incidents now mandate systematic testing of foundational models and AI-powered applications before deployment. This training equips security professionals with the skills to proactively identify security, safety, and responsible AI failures across the AI lifecycle.

Participants will learn how to assess AI systems holistically by combining traditional security testing, adversarial machine learning, and Responsible AI red teaming techniques.

About the Training at CSA XCON 2026

This training is built on real-world red team methodologies used to assess production-grade AI systems, including large language models and multimodal AI applications.

Key highlights include:

The course strongly aligns with CSA XCON’s focus on future-ready cybersecurity, addressing one of the most critical and fast-evolving threat landscapes

What You Will Learn

Participants will gain hands-on experience with:

Training Experience & Expectations

Participants will leave with a structured approach to breaking and securing AI systems in enterprise environments.

Regulatory & Mitigation Focus

In the advanced modules, participants will:

This ensures learnings are immediately applicable to real-world governance, risk, and compliance requirements.

Who Should Attend

This training is ideal for:

Skill Level

Intermediate (No prior hardware hacking experience required)

Participant Requirements

Participants should have:

All major operating systems are supported.

What Participants Will Receive

Each participant will receive:

Trainer

This training will be delivered by senior members of a globally recognized AI Red Team, with extensive experience assessing high-risk AI systems, foundational models, and AI-powered copilots in production environments. Their work has directly influenced modern AI red teaming practices across security, safety, and responsible AI domains.