See What You'll Learn in This Intensive Workshop
Watch our instructor walk through the full AI Security Specialist curriculum, what to expect from each Saturday session, and how the hands-on labs work.
Everything You Need to Master AI Security — In One Workshop
Offensive AI Labs
Attack AI systems like a red-teamer. Practice prompt injection, adversarial attacks, model poisoning, and LLM exploitation in a real cloud environment.
Defensive AI Labs
Build production-grade defenses: input validation, AI firewalls, threat modeling, incident response, and secure ML pipeline implementation.
Live Instructor Sessions
Every Saturday, 2 hours of live instruction with real-time Q&A. Ask questions, get feedback, and learn directly from an industry expert.
Real Cloud Lab Environment
No setup hassles. Your browser-based cloud lab is ready from day one — attack and defend real AI systems, not simulations.
Session Recordings Included
Miss a Saturday or want to re-watch? Every session is recorded and available to you forever — learn at your own pace.
CAISP™ Certification
Complete all 12 sessions and the capstone project to earn the globally recognized Certified AI Security Professional™ credential.
Global Cohort Community
Learn alongside security professionals from around the world. Network, collaborate, and grow together in your private cohort community.
Career-Ready Portfolio
Graduate with documented capstone projects that showcase your AI security skills to future employers and clients.
Saturdays — Zero Career Disruption
Learn without quitting your job. Every session is on Saturday for 2 hours — designed around working security professionals.
Extreme Hands-On Experience — The #1 Reason Students Choose Us
Theory without practice means nothing in cybersecurity. Every module includes real lab exercises where you attack and defend live AI systems in a cloud environment. You leave each session having done the work — not just watched it.
Think like an attacker. Exploit vulnerabilities in AI systems, LLMs, and ML pipelines before your adversaries do. For a taste of what this looks like in code, see the sketch after this list.
- Scanning LLMs for agent-based vulnerabilities & attacking AI chatbots
- Performing adversarial attacks using the TextAttack framework
- Prompt injection attacks — bypassing system prompts & security controls
- Training data poisoning & excessive agency exploitation
- Adversarial attacks using Foolbox framework
- Insecure plugin exploitation in LLM ecosystems
- Data leakage exploitation & permission escalation in LLM systems
- Poisoned pipeline attack simulation — compromising ML deployments
- Dependency confusion attacks on package management systems
- Supply chain dependency attack exploitation
- Backdoor attacks using BackdoorBox toolkit
- Red team AI systems using professional offensive methodologies
- Web scraping attacks with PyScrap & steganography using SteganoGAN
- Penetration testing against LLM applications and APIs
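For a sense of how these labs run, here is a minimal sketch using the open-source TextAttack library named above. It points the TextFooler word-substitution recipe at a public HuggingFace sentiment model; the model and dataset names are illustrative stand-ins, not the workshop's actual lab targets.

```python
# Minimal TextAttack sketch: run the TextFooler word-substitution attack
# against a public sentiment classifier. Requires: pip install textattack
import transformers
from textattack import Attacker, AttackArgs
from textattack.attack_recipes import TextFoolerJin2019
from textattack.datasets import HuggingFaceDataset
from textattack.models.wrappers import HuggingFaceModelWrapper

# Illustrative target: a BERT sentiment model fine-tuned on IMDB reviews
model = transformers.AutoModelForSequenceClassification.from_pretrained(
    "textattack/bert-base-uncased-imdb")
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "textattack/bert-base-uncased-imdb")
wrapper = HuggingFaceModelWrapper(model, tokenizer)

# Build the attack recipe and run it over a handful of test examples
attack = TextFoolerJin2019.build(wrapper)
dataset = HuggingFaceDataset("imdb", split="test")
Attacker(attack, dataset, AttackArgs(num_examples=5)).attack_dataset()
```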
Build resilient AI systems. Implement layered defenses that stop real-world attacks across the AI stack. A short robustness-testing sketch follows this list.
- Setting up the browser-based lab environment & the InvokeAI creative visual tool
- Creating a secure chatbot using Python & machine learning frameworks
- Testing defenses with Adversarial Robustness Toolbox (ART)
- Implementing defensive controls for LLM applications (layered)
- AI threat modeling using STRIDE methodology
- Risk rating workshop — calculating likelihood and impact
- AI threat modeling with IriusRisk & StrideGPT automated tools
- Implementing SCA tools & model scanning for AI security projects
- Generating SBOMs & attestations and signing models
- Developing comprehensive AI governance framework & compliance
- Adversarial robustness testing & defensive distillation
- Implementing AI-specific Web Application Firewalls (WAFs) & guardrails
- Comprehensive AI security assessment capstone project
- Model watermarking & fingerprinting to detect theft
12 Modules · From Scratch to Advanced
A meticulously structured learning path built on real-world AI security scenarios. Each module includes theory, live demonstrations, and hands-on lab exercises.
- AI evolution & security implications across industries
- ML Fundamentals: supervised, unsupervised, reinforcement learning
- Deep learning architectures: neural networks & CNNs
- NLP & computer vision security considerations
- RAG architectures and their security implications
- LLM architectures: GPT & BERT transformers
- MITRE ATT&CK and ATLAS frameworks for AI attacks
- Real-world malicious LLM tools: WormGPT & FraudGPT
- Adversarial attacks using TextAttack framework
- Attacking AI chatbots & scanning LLMs for vulnerabilities
- Complete OWASP Top 10 for LLM applications
- Direct & indirect prompt injection exploitation
- Training data poisoning attacks
- Model denial of service & supply chain vulnerabilities
- Sensitive information disclosure from LLM training data
- DevSecOps principles integrating security into AI development
- Attacks targeting CI/CD pipelines
- Real incidents: Hugging Face breaches & SAP AI Core vulnerabilities
- Software Composition Analysis (SCA) for AI projects
- AI firewalls guarding models against adversarial inputs
- STRIDE methodology for AI system threat modeling
- Creating Data Flow Diagrams (DFDs) for LLM applications
- OWASP LLM, MITRE ATLAS & BIML threat libraries
- AI Risk Repository & AI Incident Database analysis
- Automated threat modeling with IriusRisk & StrideGPT
- AI supply chain attack surface: data, models, infrastructure
- Compromised dependencies & backdoored model analysis
- Vetting processes for third-party AI frameworks
- SLSA & SCVS framework implementation
- Software Bill of Materials (SBOMs) & model signing
- Model-mediated supply chain attacks
- Self-propagating AI model worms
- Backdoors in fine-tuning processes
- AI-assisted evolving firmware & polymorphic malware
- Real-world AI security breach case studies
- NIST AI Risk Management Framework (RMF)
- ISO/IEC 42001 AI management system standards
- EU AI Act requirements & risk classification
- US AI legislation & executive orders
- Comprehensive compliance checklist for AI security
- Input validation & sanitization preventing prompt injection
- Output filtering & content moderation systems
- Access controls & authentication for model APIs
- Model watermarking & fingerprinting
- AI-specific WAFs & guardrails deployment
- FGSM & PGX adversarial example generation
- Adversarial training hardening models against perturbations
- Ensemble methods & model diversity strategies
- Certified defenses providing robustness guarantees
- Continuous monitoring & anomaly detection
- AI-specific incident response procedures & playbooks
- Detecting model poisoning & data exfiltration incidents
- Forensic investigations on compromised ML systems
- Containment strategies for compromised models
- Recovery procedures: model rollback & retraining
- Comprehensive AI security assessment methodology
- Penetration testing against LLM applications & APIs
- Red team operations simulating adversarial attacks
- Automated security testing tools for continuous validation
- CAISP™ certification exam preparation & capstone project
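Because the robustness module leans on FGSM, here is the method in miniature: perturb each input by eps in the sign of the loss gradient, x_adv = clamp(x + eps * sign(grad_x L(x, y)), 0, 1). The toy model and data below are hypothetical stand-ins, just enough to exercise the function.

```python
# Minimal from-scratch FGSM in PyTorch: nudge each input pixel by eps in the
# direction that increases the model's loss. Requires: pip install torch
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One-step FGSM: x_adv = clamp(x + eps * sign(grad_x loss), 0, 1)."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical toy classifier and data, just to exercise the function
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 3))
x = torch.rand(4, 1, 8, 8)           # four 8x8 "images" in [0, 1]
y = torch.randint(0, 3, (4,))        # random labels over three classes
x_adv = fgsm(model, x, y, eps=0.1)
print((x_adv - x).abs().max())       # perturbation stays bounded by eps
```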
12 Saturdays · 2 Hours Each
Every session runs live from May 30 to September 12, 2026. Designed for working security professionals — no career disruption required.
All sessions are live on Saturdays. Each session runs 2 hours with live Q&A, and recordings are available within 24 hours.