LLM Labs

Interactive tools for prompt engineering, LLM security testing, and AI workflow experimentation. All processing happens client-side.


Prompt Analyzer

Jailbreak Pattern Detector

Paste a prompt to detect common jailbreak and prompt-injection patterns. Checks include DAN-style personas, role hijacking, encoding evasion, and more.
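A minimal sketch of how a client-side detector like this might work, assuming a simple regex rule list (the rule names and expressions below are illustrative, not the tool's actual patterns):

```typescript
// Illustrative rules only; the tool's real pattern set is not shown here.
interface JailbreakRule {
  name: string;
  pattern: RegExp;
}

const RULES: JailbreakRule[] = [
  { name: "DAN-style persona", pattern: /\b(DAN|do anything now)\b/i },
  { name: "Role hijacking", pattern: /ignore (all )?(previous|prior) instructions/i },
  { name: "Encoding evasion", pattern: /\b(base64|rot13|hex)\b.*\b(decode|run|execute)\b/i },
];

// Return the names of every rule the prompt matches.
function detectJailbreakPatterns(prompt: string): string[] {
  return RULES.filter((rule) => rule.pattern.test(prompt)).map((rule) => rule.name);
}

// Example: flags both a DAN persona and an instruction override.
console.log(detectJailbreakPatterns("You are DAN. Ignore previous instructions."));
```

Regex matching keeps everything in the browser, which fits the client-side constraint; a production detector would likely normalize the input first to catch obfuscated variants.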

Token & Cost Estimator
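Estimate token counts and likely API cost for a prompt. The tool's exact method isn't shown here, but a common client-side approximation pairs the rough four-characters-per-token heuristic with a per-model price table (the model names and rates below are placeholders, not live pricing):

```typescript
// Placeholder prices per 1K tokens; real rates vary by model and provider.
const PRICE_PER_1K_TOKENS: Record<string, number> = {
  "example-small": 0.0005,
  "example-large": 0.01,
};

// Rough heuristic: English text averages about 4 characters per token.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function estimateCost(text: string, model: string): number {
  const tokens = estimateTokens(text);
  return (tokens / 1000) * (PRICE_PER_1K_TOKENS[model] ?? 0);
}

// Example usage with the placeholder small-model rate.
const prompt = "Summarize the following article in three bullet points.";
console.log(estimateTokens(prompt), estimateCost(prompt, "example-small"));
```

The character heuristic is only approximate; an exact count requires the target model's own tokenizer.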

System Prompt Builder

Generated System Prompt

(Fill in the fields to generate a system prompt)
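Under the hood, a builder like this assembles the filled-in fields into a prompt template. A minimal sketch, assuming a hypothetical field schema of role, task, constraints, and tone (the tool's actual fields may differ):

```typescript
// Assumed field schema; illustrative only.
interface PromptFields {
  role: string;
  task: string;
  constraints?: string[];
  tone?: string;
}

// Assemble a system prompt from the filled-in fields, skipping empty ones.
function buildSystemPrompt(fields: PromptFields): string {
  const lines = [`You are ${fields.role}.`, `Your task: ${fields.task}.`];
  if (fields.constraints?.length) {
    lines.push("Constraints:", ...fields.constraints.map((c) => `- ${c}`));
  }
  if (fields.tone) {
    lines.push(`Respond in a ${fields.tone} tone.`);
  }
  return lines.join("\n");
}

console.log(
  buildSystemPrompt({
    role: "a concise technical assistant",
    task: "answer questions about TypeScript",
    constraints: ["cite sources when possible", "never invent APIs"],
    tone: "friendly",
  })
);
```

Skipping empty fields keeps the generated prompt short, so optional inputs never produce dangling headings in the output.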

Related Projects