pub struct PromptGuard { /* private fields */ }Expand description
The prompt injection detection engine.
Maintains a configurable set of detection rules and a severity threshold for blocking. Inputs that match patterns at or above the threshold are blocked; those with lower-severity matches produce warnings.
Implementations§
Source§impl PromptGuard
impl PromptGuard
Sourcepub fn new() -> Self
pub fn new() -> Self
Create a new guard with built-in detection patterns and a default
block threshold of High.
Sourcepub fn with_config(config: PromptGuardConfig) -> Self
pub fn with_config(config: PromptGuardConfig) -> Self
Create a guard with custom configuration.
Sourcepub fn set_block_threshold(&mut self, threshold: InjectionSeverity)
pub fn set_block_threshold(&mut self, threshold: InjectionSeverity)
Set the minimum severity level that triggers blocking.
Sourcepub fn config(&self) -> &PromptGuardConfig
pub fn config(&self) -> &PromptGuardConfig
Get the current configuration.
Sourcepub fn add_pattern(
&mut self,
name: &str,
pattern: &str,
severity: InjectionSeverity,
description: &str,
)
pub fn add_pattern( &mut self, name: &str, pattern: &str, severity: InjectionSeverity, description: &str, )
Add a custom detection pattern.
Sourcepub fn scan_input(&self, text: &str) -> Vec<InjectionAlert>
pub fn scan_input(&self, text: &str) -> Vec<InjectionAlert>
Scan input text and return all injection alerts (pattern matching only).
Sourcepub fn scan(&self, input: &str) -> PromptGuardResult
pub fn scan(&self, input: &str) -> PromptGuardResult
Full scan with scoring, structural analysis, and threat assessment.
Sourcepub fn is_safe(&self, input: &str) -> bool
pub fn is_safe(&self, input: &str) -> bool
Quick check: returns true if the input is considered safe.
Sourcepub fn sanitize(&self, input: &str) -> String
pub fn sanitize(&self, input: &str) -> String
Sanitize input by stripping detected injection patterns.
Sourcepub fn scan_and_decide(&self, text: &str) -> ScanDecision
pub fn scan_and_decide(&self, text: &str) -> ScanDecision
Scan input text and return a decision: Allow, Warn, or Block.
This is the legacy API; prefer scan() for richer results.
Trait Implementations§
Source§impl Clone for PromptGuard
impl Clone for PromptGuard
Source§fn clone(&self) -> PromptGuard
fn clone(&self) -> PromptGuard
1.0.0 · Source§fn clone_from(&mut self, source: &Self)
fn clone_from(&mut self, source: &Self)
source. Read more