Breaking GenAI: Offensive Security for Modern AI Systems is an advanced, hands-on training at CSA XCON 2026, Dehradun, focused on red teaming modern AI and GenAI systems used in real-world, high-risk environments.
As AI systems rapidly move from experimentation to production, red teaming AI is no longer optional. Global regulations, enterprise governance requirements, and real-world incidents now mandate systematic testing of foundational models and AI-powered applications before deployment. This training equips security professionals with the skills to proactively identify security, safety, and responsible AI failures across the AI lifecycle.
Participants will learn how to assess AI systems holistically by combining traditional security testing, adversarial machine learning, and Responsible AI red teaming techniques.
This training is built on real-world red team methodologies used to assess production-grade AI systems, including large language models and multimodal AI applications.
Key highlights include:
The course strongly aligns with CSA XCON’s focus on future-ready cybersecurity, addressing one of the most critical and fast-evolving threat landscapes
Participants will gain hands-on experience with:
Participants will leave with a structured approach to breaking and securing AI systems in enterprise environments.
In the advanced modules, participants will:
This ensures learnings are immediately applicable to real-world governance, risk, and compliance requirements.
This training is ideal for:
Intermediate (No prior hardware hacking experience required)
Participants should have:
All major operating systems are supported.
Each participant will receive:
This training will be delivered by senior members of a globally recognized AI Red Team, with extensive experience assessing high-risk AI systems, foundational models, and AI-powered copilots in production environments. Their work has directly influenced modern AI red teaming practices across security, safety, and responsible AI domains.