Expand description
Safety layer for prompt injection defense.
This crate provides protection against prompt injection attacks by:
- Detecting suspicious patterns in external data
- Sanitizing tool outputs before they reach the LLM
- Validating inputs before processing
- Enforcing safety policies
- Detecting secret leakage in outputs
Structs§
- Injection
Warning - Warning about a potential injection attempt.
- Leak
Detector - Detector for secret leaks in output data.
- Leak
Match - A detected potential secret leak.
- Leak
Pattern - A pattern for detecting secret leaks.
- Leak
Scan Result - Result of scanning content for leaks.
- Policy
- Safety policy containing rules.
- Policy
Rule - A policy rule that defines what content is blocked or flagged.
- Safety
Config - Safety configuration.
- Safety
Layer - Unified safety layer combining sanitizer, validator, and policy.
- Sanitized
Output - Result of sanitizing external content.
- Sanitizer
- Sanitizer for external data.
- Validation
Result - Result of validating input.
- Validator
- Input validator.
Enums§
- Leak
Action - Action to take when a leak is detected.
- Leak
Detection Error - Error from leak detection.
- Leak
Severity - Severity of a detected leak.
- Policy
Action - Action to take when a policy is violated.
- Severity
- Severity level for safety issues.
Functions§
- params_
contain_ manual_ credentials - Check whether HTTP request parameters contain manually-provided credentials.
- wrap_
external_ content - Wrap external, untrusted content with a security notice for the LLM.