PromptLock is an open-source engine that sanitizes, detects, and blocks prompt injection attacks before they reach your LLM.
Try breaking it — type a prompt injection in the sandbox →Try breaking it — type a prompt injection below ↓
Send a prompt to an LLM — toggle the shield to see the difference.
Enter a prompt or click an example above.
Jailbreaks, instruction override, system prompt extraction, encoded payloads, PII leaks — blocked before they reach the model.
Jailbreaks, instruction override, prompt leaks, token smuggling, context overflow.
200 embedded attack samples. Catches paraphrased attacks no regex can match.
Email, phone, credit card (Luhn), SSN, API keys — redacted before reaching the LLM.
Randomized XML tags separate trusted instructions from untrusted user data.
NFKC normalization blocks homoglyph attacks. Invisible char stripping. Base64/Hex decoding.
Optional local model classifies novel attacks that bypass pattern matching.
More layers catch more attacks — but add latency. You choose the tradeoff.
Blocks only confirmed attacks (score ≥ 70). Fastest path for high-throughput apps.
Blocks likely attacks (score ≥ 40). Best balance of speed and coverage for most apps.
Blocks suspicious inputs (score ≥ 15). Maximum security. Requires Ollama for embeddings + judge.
For context: a typical LLM API call takes 500–3000ms. Even aggressive mode adds less latency than the LLM itself.
shield, _ := promptlock.New(
promptlock.WithLevel(promptlock.Balanced),
promptlock.WithRedactPII(true),
)
safe, err := shield.Protect(ctx, userInput)
if err != nil {
// blocked — prompt injection detected
}from promptlock import Shield
shield = Shield(level="balanced", redact_pii=True)
safe = shield.protect(user_input)
clean = shield.verify_context(rag_chunks)import { Shield } from '@hatchedland/prompt-lock';
const shield = new Shield({ level: 'balanced' });
const safe = shield.protect(userInput);
const clean = shield.verifyContext(ragChunks);Two lines to protect. One for the user query, one for the RAG context.
def secure_search(query):
safe_query = shield.protect(query) # 1. protect input
chunks = vector_db.query(safe_query) # 2. search
clean = shield.verify_context(chunks) # 3. verify context
return llm.generate(safe_query, clean) # 4. generateBuilt by Cawght — AI-powered security testing for applications that need to enforce their own rules.

Also from us
Record your app, let AI find where the business rules break. Test for privilege escalation, IDOR, race conditions, and more.
Learn more