Sample application with intentional LLM security vulnerabilities for testing AI governance scanners.
DO NOT use this code in production. Every file contains deliberately insecure patterns.
| File | Vulnerability | OWASP LLM ID |
|---|---|---|
src/chatbot.py |
Prompt injection | LLM01 |
src/chatbot.js |
Prompt injection | LLM01 |
api/customer_support.py |
PII exposure | LLM06 |
api/medical_assistant.js |
PII exposure | LLM06 |
agents/auto_agent.py |
Excessive agency | LLM08 |
agents/task_runner.js |
Excessive agency | LLM08 |
src/output_handler.py |
Jailbreak / insecure output | LLM02 |
src/render_response.js |
Jailbreak / insecure output | LLM02 |