Training Overview
The rapid adoption of Generative AI and Large Language Models (LLMs) has introduced an entirely new attack surface.
Unlike traditional applications, LLM-powered systems operate in probabilistic execution environments, interact with external tools, process natural language instructions, and often operate with autonomous decision-making capabilities.
This 2-day intensive bootcamp is designed to equip participants with:
- A deep understanding of AI/LLM-specific attack vectors
- Practical exploitation techniques used against real-world GenAI systems
- Defensive engineering methodologies to mitigate emerging threats
- Hands-on experience deploying and attacking private LLM systems
- Secure design patterns for AI-enabled applications
Participants will leave with actionable knowledge to assess, exploit, and defend modern AI systems.
Day 1 - Foundations & Exploitation
Module:
Understanding the AI Attack Surface
Module Overview
This module builds the technical foundation required to understand why AI systems break differently than traditional applications.
We analyze the architectural differences between conventional software and LLM-powered systems, focusing on:
- Prompt pipelines
- Retrieval Augmented Generation (RAG)
- Tool usage and function calling
- Embeddings and vector databases
- Agentic workflows
Participants will learn how traditional vulnerabilities evolve in AI contexts and why new classes of vulnerabilities emerge.
Key Topics Covered
- Evolution from AI → ML → GenAI → LLMs
- Anatomy of an LLM-powered application
- Probabilistic execution vs deterministic logic
- AI-specific threat modeling
- Mapping traditional AppSec risks to LLM systems
- Deep dive into OWASP Top 10 for LLMs
- Real-world LLM breaches and case studies
Hands-On Labs
- Deploying a local/private LLM
- Identifying AI-specific attack surfaces
- Security analysis of a sample GenAI application
- Mapping vulnerabilities to OWASP LLM categories
Domain 2: Semantic Input Validation in AI Systems
Module:
Beyond Regex: Context-Aware Validation with LLMs
Module Overview
Traditional rule-based validation fails when inputs are complex, contextual, or ambiguous. AI-powered systems require semantic-level validation.
This module explores how to implement intelligent validation mechanisms using LLMs themselves — without introducing new vulnerabilities.
Participants will learn:
- The difference between syntactic vs semantic validation
- When rule-based validation breaks
- Using LLMs to enforce policy-level constraints
- Designing secure semantic validation pipelines
Key Topics Covered
- Defining semantic input validation
- Context-aware policy enforcement
- AI guardrails vs deterministic filters
- Secure LLM validation architecture
- Risks of recursive AI validation
- Prompt structure validation techniques
- Output constraint enforcement using structured schemas
Practical Labs
- Implement semantic validation using LLM + Pydantic
- Build policy-based content enforcement engine
- Prevent malicious instruction injection via structured validation
- Test bypass scenarios
Day 2 — Advanced Exploitation & Defense
Domain 3: Prompt Injection - Offensive & Defensive Perspectives
Module
Prompt Injection: The SQL Injection of AI Systems
Module Overview
Prompt Injection is one of the most critical vulnerabilities affecting LLM-powered applications today.
In this advanced module, participants will explore:
- The mechanics of prompt injection
- Direct and indirect injection techniques
- Data exfiltration attacks
- Jailbreaking strategies
- Tool manipulation attacks
- Cross-context contamination
Participants will attack real deployed LLM systems and observe exploitation outcomes.
Attack Techniques Covered
- Direct prompt override attacks
- Instruction hierarchy manipulation
- Data exfiltration via hidden prompts
- Retrieval manipulation in RAG pipelines
- Multi-step reasoning manipulation
- System prompt leakage techniques
- Agent workflow hijacking
Defensive Strategies Covered
- Prompt isolation patterns
- Context segmentation
- Structured prompting
- Output verification layers
- Multi-model verification
- LLM firewall strategies
- Guardrail frameworks
Labs
- Deploy private LLM
- Execute multiple injection attacks
- Extract sensitive system prompts
- Implement and test defense mechanisms
- Measure probabilistic bypass success rates
Domain 4: Agentic AI Exploitation & Security
Module
Attacking Autonomous AI Agents
Module Overview
Modern AI systems increasingly operate as agents with tool access, API integrations, and decision-making capabilities.
This module explores how attackers can exploit:
- Tool invocation systems
- Function calling pipelines
- API chaining
- External data sources
- Multi-agent communication systems
We analyze how excessive autonomy creates severe security risk.
Topics Covered
- What are AI Agents?
- Function calling risks
- Tool abuse and over-permissioned systems
- Excessive agency vulnerabilities
- Agent-to-agent attack vectors
- Chained prompt injection
- Data poisoning risks
- Overreliance on LLM reasoning
Hands-On Labs
- Build a simple AI agent
- Manipulate tool execution via injection
- Force unauthorized API calls
- Demonstrate excessive agency exploitation
- Implement least-privilege agent controls
- Apply containment and sandbox strategies
What Participants Will Build
By the end of this bootcamp, attendees will have:
- Deployed and attacked private LLM environments
- Performed prompt injection against live systems
- Implemented semantic validation pipelines
- Built a hardened AI agent architecture
- Understood how to secure RAG systems
- Applied OWASP LLM Top 10 controls
Key Takeaways
Participants will leave with:
- Practical red teaming techniques for AI systems
- Defensive design patterns for LLM security
- Understanding of real-world AI attack cases
- Framework to secure enterprise GenAI deployments
- AI threat modeling methodology
- Hands-on exploitation experience
Why This Bootcamp Stands Out
- Fully hands-on technical workshop
- Attack + Defense coverage
- Realistic exploitation labs
- Private LLM deployment exercises
- Conference-grade deep technical content
- Built specifically for security professionals