Crate llm_shield_models

ML Model Infrastructure for LLM Shield

This crate provides the shared infrastructure for ML-based security scanners: loading ONNX models, tokenizing input, running inference, and caching results.

Re-exports§

pub use model_loader::ModelLoader;
pub use model_loader::ModelConfig;
pub use model_loader::ModelType;
pub use tokenizer::TokenizerWrapper;
pub use tokenizer::TokenizerConfig;
pub use tokenizer::Encoding;
pub use inference::InferenceEngine;
pub use inference::InferenceResult;
pub use inference::TokenPrediction;
pub use inference::PostProcessing;
pub use registry::ModelRegistry;
pub use registry::ModelTask;
pub use registry::ModelVariant;
pub use registry::ModelMetadata;
pub use cache::ResultCache;
pub use cache::CacheConfig;
pub use cache::CacheStats;
pub use types::MLConfig;
pub use types::CacheSettings;
pub use types::HybridMode;
pub use types::DetectionMethod;
pub use types::InferenceMetrics;
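The `HybridMode` and `DetectionMethod` re-exports suggest scanners can combine heuristic and ML detection. The sketch below is an illustrative stand-in only: the variant names (`HeuristicOnly`, `MlOnly`, `HeuristicFirst`) and the combination logic are assumptions, not the crate's actual definitions.

```rust
// Illustrative stand-ins for the crate's HybridMode and DetectionMethod
// types; the real variants and decision logic may differ.
#[derive(Debug, PartialEq)]
enum DetectionMethod {
    Heuristic,
    Ml,
}

enum HybridMode {
    HeuristicOnly,
    MlOnly,
    // Run heuristics first; fall back to ML when heuristics do not fire.
    HeuristicFirst { ml_threshold: f32 },
}

// Return a verdict plus the method that produced it.
fn detect(mode: &HybridMode, heuristic_hit: bool, ml_score: f32) -> (bool, DetectionMethod) {
    match mode {
        HybridMode::HeuristicOnly => (heuristic_hit, DetectionMethod::Heuristic),
        HybridMode::MlOnly => (ml_score >= 0.5, DetectionMethod::Ml),
        HybridMode::HeuristicFirst { ml_threshold } => {
            if heuristic_hit {
                (true, DetectionMethod::Heuristic)
            } else {
                (ml_score >= *ml_threshold, DetectionMethod::Ml)
            }
        }
    }
}

fn main() {
    let mode = HybridMode::HeuristicFirst { ml_threshold: 0.8 };
    let (flagged, method) = detect(&mode, false, 0.92);
    println!("flagged={flagged} via {method:?}");
}
```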

Modules§

cache
Result caching with LRU eviction and TTL
inference
Inference Engine
model_loader
Model Loader with ONNX Runtime Integration
registry
Model Registry for LLM Shield
tokenizer
Tokenizer Wrapper for HuggingFace Tokenizers
types
Common Types for ML Model Integration
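The `cache` module describes "LRU eviction and TTL". As a minimal sketch of what that policy means (not the crate's `ResultCache` implementation, whose API may differ): each entry expires after a fixed time-to-live, and when the cache is full the least recently used entry is evicted.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Illustrative LRU + TTL cache, standing in for the crate's ResultCache.
struct TtlCache {
    capacity: usize,
    ttl: Duration,
    tick: u64, // monotonically increasing recency counter
    // value, insertion time (for TTL), last-use tick (for LRU)
    entries: HashMap<String, (String, Instant, u64)>,
}

impl TtlCache {
    fn new(capacity: usize, ttl: Duration) -> Self {
        Self { capacity, ttl, tick: 0, entries: HashMap::new() }
    }

    fn insert(&mut self, key: &str, value: &str) {
        self.tick += 1;
        if self.entries.len() >= self.capacity && !self.entries.contains_key(key) {
            // Evict the entry with the smallest last-use tick (least recently used).
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (_, _, t))| *t)
                .map(|(k, _)| k.clone())
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries
            .insert(key.to_string(), (value.to_string(), Instant::now(), self.tick));
    }

    fn get(&mut self, key: &str) -> Option<String> {
        let expired = match self.entries.get(key) {
            Some((_, inserted, _)) => inserted.elapsed() >= self.ttl,
            None => return None,
        };
        if expired {
            self.entries.remove(key); // TTL elapsed: drop the stale result
            return None;
        }
        self.tick += 1;
        let tick = self.tick;
        self.entries.get_mut(key).map(|(v, _, last)| {
            *last = tick; // refresh recency on hit
            v.clone()
        })
    }
}

fn main() {
    let mut cache = TtlCache::new(2, Duration::from_secs(60));
    cache.insert("prompt-hash", "benign");
    println!("{:?}", cache.get("prompt-hash"));
}
```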

Type Aliases§

Result
Result type alias
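A crate-level `Result` alias conventionally fixes the error type so APIs can write `Result<T>` throughout. The sketch below shows the pattern; the error type `ModelError` and its variants are hypothetical, since this page does not document the crate's actual error enum.

```rust
use std::fmt;

// Hypothetical crate error type, for illustration only.
#[derive(Debug)]
enum ModelError {
    Load(String),
    Inference(String),
}

impl fmt::Display for ModelError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ModelError::Load(m) => write!(f, "model load error: {m}"),
            ModelError::Inference(m) => write!(f, "inference error: {m}"),
        }
    }
}

// The alias: `Result<T>` instead of `std::result::Result<T, ModelError>`.
type Result<T> = std::result::Result<T, ModelError>;

// Example of an API written against the alias.
fn check_model_path(path: &str) -> Result<&str> {
    if path.ends_with(".onnx") {
        Ok(path)
    } else {
        Err(ModelError::Load(format!("not an ONNX file: {path}")))
    }
}

fn main() {
    match check_model_path("scanner.bin") {
        Ok(p) => println!("ok: {p}"),
        Err(e) => println!("{e}"),
    }
}
```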