LLM Labs
Interactive tools for prompt engineering, LLM security testing, and AI workflow experimentation. All processing happens client-side.
prompt engineering · security testing · client-side
/lab --prompt-analysis
Prompt Analyzer
/lab --jailbreak-detector
Jailbreak Pattern Detector
Paste a prompt to detect common jailbreak/injection patterns. Tests include DAN, role hijacking, encoding evasion, and more.
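The detector's exact rule set isn't shown here; a minimal client-side sketch of regex-based matching for the pattern families named above (DAN, role hijacking, encoding evasion), with illustrative patterns that are assumptions rather than the lab's actual rules:

```typescript
// Minimal sketch: regex-based jailbreak/injection pattern checks.
// Pattern names and regexes are illustrative, not the site's actual rule set.
interface PatternHit {
  name: string;
  match: string;
}

const PATTERNS: { name: string; regex: RegExp }[] = [
  { name: "DAN / do-anything-now", regex: /\bDAN\b|do anything now/i },
  { name: "Role hijacking", regex: /ignore (all|previous|prior) (instructions|prompts)|you are now/i },
  { name: "Encoding evasion", regex: /\b(base64|rot13|hex[- ]?encode)\b/i },
];

export function detectJailbreakPatterns(prompt: string): PatternHit[] {
  const hits: PatternHit[] = [];
  for (const { name, regex } of PATTERNS) {
    const m = prompt.match(regex);
    if (m) hits.push({ name, match: m[0] });
  }
  return hits;
}
```

Running everything through plain regexes keeps the check fully client-side, at the cost of missing paraphrased or obfuscated variants.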
/lab --token-estimator
Token & Cost Estimator
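The estimator's method isn't described in this section; a minimal sketch of a character-based token approximation and cost calculation, where the ~4 characters/token heuristic and the per-million-token price are illustrative assumptions, not an actual tokenizer or current provider pricing:

```typescript
// Minimal sketch: rough token count and input-cost estimate.
// The chars-per-token heuristic and price are assumptions for illustration.
interface CostEstimate {
  tokens: number;
  inputCostUSD: number;
}

export function estimateTokensAndCost(
  text: string,
  pricePerMillionInputTokens = 3.0, // hypothetical rate in USD
): CostEstimate {
  const tokens = Math.ceil(text.length / 4); // ~4 characters per token heuristic
  const inputCostUSD = (tokens / 1_000_000) * pricePerMillionInputTokens;
  return { tokens, inputCostUSD };
}
```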
/lab --prompt-builder
System Prompt Builder
Fill in the fields to generate a system prompt; the result appears under Generated System Prompt.
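The builder's actual fields aren't listed here; a minimal sketch of how form fields might be assembled into a system prompt, assuming hypothetical role, tone, and constraints inputs:

```typescript
// Minimal sketch: assemble a system prompt from a few form fields.
// The field names (role, tone, constraints) are assumed, not the builder's real schema.
interface PromptFields {
  role: string;
  tone?: string;
  constraints?: string[];
}

export function buildSystemPrompt({ role, tone, constraints = [] }: PromptFields): string {
  const lines = [`You are ${role}.`];
  if (tone) lines.push(`Respond in a ${tone} tone.`);
  if (constraints.length > 0) {
    lines.push("Constraints:");
    for (const c of constraints) lines.push(`- ${c}`);
  }
  return lines.join("\n");
}
```

Example: `buildSystemPrompt({ role: "a concise technical editor", constraints: ["cite sources"] })` yields a three-line prompt with a role statement and a constraints list.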