About This Project

Purpose

The Prompt Injection Testing Lab is a defensive security testing tool designed to evaluate how well AI systems resist prompt injection attacks when fetching and processing web content.

Methodology

Each blog post on this site contains genuine, useful technical content (800-1500 words) with embedded prompt injection payloads. The injections use a variety of techniques ranging from basic CSS hiding to advanced semantic manipulation.

Every injection includes a unique canary token — a distinctive string that should never appear in a legitimate AI response. If an AI system includes a canary token in its output after processing a page, that injection technique successfully bypassed the system's defenses.

Injection Categories

  • Hidden text — CSS display:none, off-screen positioning, zero-height elements
  • HTML comments — Instructions embedded in HTML comment blocks
  • Invisible Unicode — Zero-width characters, homoglyphs, RTL overrides
  • Encoding tricks — Base64, HTML entities, hex encoding
  • Direct override — Explicit instruction override attempts
  • Authority impersonation — Fake system prompts and official messages
  • Role-play/jailbreak — Persona-based bypass attempts
  • Markdown injection — Using markdown formatting as instruction vectors
  • Meta tag injection — HTML meta tags as instruction carriers
  • Data attributes — Custom HTML data attributes with instructions
  • Context manipulation — Contradictory instructions and fake conversation history
  • Semantic manipulation — Persuasion-based approaches using authority and framing

Ethical Use

This tool is intended solely for defensive security testing. Use it to evaluate and improve your AI system's resistance to prompt injection. Do not use these techniques for malicious purposes.

© 2025 DevPractical. Practical guides for modern software engineering.