Introduction
Welcome to the Lab Book instructions for Steve's Chat Playground!
This project provides hands-on experience with LLM/Gen AI security concepts through 17 comprehensive exercises. You can approach this in two ways:
- Free Play: Just go play with the deployed app (either locally or with the deployed demo)
- Structured Learning: Follow this Lab Book for a guided path through many security concepts
Each lab includes several exercises with varying skill levels:
Skill Level 1
No special skills required
Skill Level 2
Some sysadmin or developer knowledge
Skill Level 3
Requires real developer skills
Note: L1 and L2 skill requirements are not extremely deep - almost anyone can work through them if they want to try. You can skip advanced exercises and still proceed to the next lab. Only L3 requires real developer chops.
Prerequisites: The most common requirement will be an OpenAI API key (to power models). Most exercises can be run for free with no external requirements.
Lab Exercises
5 progressive labs with 17 total exercises covering essential LLM security concepts
Lab 1: First Steps
Get familiar with the playground project and meet Eliza, a simple local chatbot. Learn about the project structure and basic bot functionality.
Lab 2: Broken Bot
Meet Oscar, a simulated jailbroken bot, and learn about guardrails. Explore how simple filters can prevent harmful content and understand basic security measures.
Lab 3: Locking the Front Door and Back Door
Fight prompt injection attacks and learn about output filtering. Understand how to protect against both input and output vulnerabilities in AI systems.
Lab 4: Simple vs. Smart
Compare local filters with AI-powered moderation. Learn about advanced prompt injection techniques and automated testing for security measures.
Lab 5: Go Bananas
Advanced exercises for developers! Create custom blocklists, build PII guardrails, and develop robust security measures. Everything here is extra credit.
To Learn More About LLM Security
Ready to dive deeper into LLM and Generative AI security? Check out these excellent resources:
The Developer's Playbook for Large Language Model Security
Comprehensive guide to securing Large Language Models and Generative AI applications. Learn practical techniques for protecting AI systems from various attack vectors.
OWASP Gen AI Security Project
The OWASP Top 10 for LLM Applications and comprehensive resources for securing generative AI systems. Industry-standard guidance from the leading application security organization.