394.110 kr.
Ekki til á lager
This intensive two-day course explores the security risks and challenges introduced by Large Language Models (LLMs) as they become embedded in modern digital systems. Through AI labs and real-world threat simulations, participants will develop the practical expertise to detect, exploit, and remediate vulnerabilities in AI-powered environments.
The course uses a defence-by-offence methodology, helping learners build secure, reliable, and efficient LLM applications. Content is continuously updated to reflect the latest threat vectors, exploits, and mitigation strategies, making this training essential for AI developers, security engineers, and system architects working at the forefront of LLM deployment.
Participants should have:
This course is ideal for:
By the end of this course, learners will be able to:
Prompt engineering
Prompt injection
Lab activities:
ReACT LLM agent prompt injection
Lab activities:
Insecure output handling
Lab activities:
Training data poisoning
Lab activities:
Supply chain vulnerabilities
Sensitive information disclosure
Lab activities:
Insecure plugin design
Lab activities:
Excessive agency in LLM systems
Lab activities:
Overreliance in LLMs
This course does not include formal certification. Participants will complete multiple hands-on labs simulating attacker tactics and securing LLM implementations. These labs are designed to assess comprehension, critical thinking, and applied technical skill.
This course includes: