This course provides a comprehensive introduction to AI security and the evolving risks that accompany modern artificial intelligence systems. Participants explore how attackers exploit vulnerabilities in predictive and generative models, including prompt injection, model jailbreaks, denial-of-service attacks, model theft, and data poisoning. The course examines the full attack surface of AI systems, from training datasets to deployed applications, and equips learners with practical defence strategies using security APIs, structured prompt defences, and robust infrastructure design. Through hands-on exercises and real-world scenarios, participants learn to build responsible, reliable, and secure AI capabilities that protect organisational assets and maintain trust in AI-augmented systems.
Participants should have:
This course is designed for:
By the end of this course, learners will be able to:
Introduction to AI security
The AI security landscape
Prompt injection
Model jailbreaks
Prompt extraction
Defending AI systems
Visual prompt injection
Denial of service
Model theft
LLM integration
Training data manipulation
Secure supply chain
Human-AI interaction
Secure AI infrastructure
This course provides extensive practical experience through: