//! Guardrails Framework
//!
//! A decoupled content safety and validation system for AI Gateway.
//!
//! # Features
//!
//! - **OpenAI Moderation**: Integration with OpenAI's content moderation API
//! - **PII Detection**: Detect and mask personally identifiable information
//! - **Prompt Injection**: Detect potential prompt injection attacks
//! - **Custom Rules**: Define custom guardrail rules
//! - **Middleware**: Actix-web middleware for request/response filtering
//!
//! # Architecture
//!
//! The guardrails system is designed with the following principles:
//! - **Decoupled**: Each guardrail is independent and can be enabled/disabled
//! - **Extensible**: Easy to add new guardrail types via the `Guardrail` trait
//! - **Async**: All operations are non-blocking
//! - **Configurable**: Fine-grained control over behavior
//!
//! # Quick Start
//!
//! ```rust,ignore
//! use litellm_rs::core::guardrails::{GuardrailEngine, GuardrailConfig};
//!
//! let config = GuardrailConfig::default()
//!     .enable_openai_moderation(true)
//!     .enable_pii_detection(true);
//!
//! let engine = GuardrailEngine::new(config).await?;
//!
//! // Check the input against all enabled guardrails
//! let result = engine.check_input("Hello, world!").await?;
//! if result.is_blocked() {
//!     println!("Content blocked: {:?}", result.reasons());
//! }
//! ```
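//!
//! # Custom Guardrails
//!
//! A sketch of adding a custom rule via the `Guardrail` trait. The method
//! names and the `GuardrailResult`/`GuardrailError` types shown here are
//! assumptions for illustration; consult the trait's definition for the
//! actual interface.
//!
//! ```rust,ignore
//! use litellm_rs::core::guardrails::Guardrail;
//!
//! /// Blocks any input containing a configured phrase.
//! struct BannedPhraseGuardrail {
//!     phrase: String,
//! }
//!
//! #[async_trait::async_trait]
//! impl Guardrail for BannedPhraseGuardrail {
//!     fn name(&self) -> &str {
//!         "banned_phrase"
//!     }
//!
//!     async fn check_input(&self, content: &str) -> Result<GuardrailResult, GuardrailError> {
//!         if content.contains(&self.phrase) {
//!             Ok(GuardrailResult::blocked("contains banned phrase"))
//!         } else {
//!             Ok(GuardrailResult::allowed())
//!         }
//!     }
//! }
//! ```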
// Re-export main types.
//
// NOTE: the submodule paths below are assumed from the re-exported type
// names; adjust them to match this module's actual layout.
pub use config::GuardrailConfig;
pub use engine::GuardrailEngine;
pub use moderation::OpenAIModerationGuardrail;
pub use pii::PIIGuardrail;
pub use prompt_injection::PromptInjectionGuardrail;
pub use traits::Guardrail;