ZeroLeaks
Shield SDK

What is @zeroleaks/shield

Runtime prompt security package for LLM applications. Hardening, injection detection, and output sanitization in under 5ms.

What is @zeroleaks/shield

npm version

@zeroleaks/shield is a runtime prompt security package for LLM applications. It adds defense-in-depth to your AI stack by hardening system prompts, detecting injection attempts in user input, and sanitizing model output for leaked prompt fragments. All operations complete in under 5ms and never mutate your objects.

Three Core Capabilities

harden — Injects security rules into system prompts to resist instruction override, role hijacking, and prompt extraction. Configurable persona anchoring and anti-extraction directives.

detect — Heuristic-based injection detection on user input. Normalizes Unicode (NFKC), bounds input length, and matches against 10 pattern categories. Default threshold is medium; default action on detection is block.

sanitize — N-gram matching to detect leaked system prompt fragments in model output. Redacts matches before returning responses. Configurable n-gram size and similarity threshold.

Provider Wrappers

Shield provides drop-in wrappers for popular LLM clients. Each wrapper applies hardening, detection, and sanitization automatically:

  • OpenAIshieldOpenAI(client, options) wraps chat.completions.create
  • AnthropicshieldAnthropic(client, options) wraps messages.create
  • GroqshieldGroq(client, options) wraps chat.completions.create
  • Vercel AI SDKshieldMiddleware(options) or shieldLanguageModelMiddleware(options) with wrapLanguageModel for generateText / streamText

Design Principles

  • Non-mutating — Caller objects are never mutated; copies are used internally
  • Unicode normalization — Input is normalized with NFKC before detection
  • Length bounds — Input and output are truncated to 1MB by default to avoid DoS
  • Fast — Target execution time under 5ms for all operations

Next Steps

On this page