Module llm_debugging

Large Language Model (LLM) Specific Debugging

This module provides specialized debugging capabilities for large language models, focusing on safety, alignment, factuality, toxicity detection, and performance characteristics specific to modern LLMs.
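
A minimal usage sketch follows. Only the constructor `llm_debugger` comes from this module's public functions; the import path is abbreviated, and the analysis method shown in the comment is a hypothetical placeholder, since `LLMDebugger`'s methods are documented on its own page rather than listed here.

```rust
// Import path is abbreviated; in a real project this item lives under
// the module's parent crate (e.g. `parent_crate::llm_debugging::...`).
use llm_debugging::llm_debugger;

fn main() {
    // Build a debugger with the default configuration.
    let _debugger = llm_debugger();

    // Hypothetical analysis call (method name and signature are
    // assumptions for illustration, not part of this listing):
    // let report = _debugger.analyze_output(prompt, response);
}
```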

Structs§

AlignmentAnalysisResult
Result of an alignment analysis
AlignmentMetrics
Metrics for alignment monitoring
AlignmentMonitor
Monitor for ensuring that LLM outputs align with intended behavior
BatchLLMAnalysisReport
Analysis report aggregated over a batch of LLM outputs
BatchMetrics
Metrics aggregated over a batch of outputs
BiasAnalysisResult
Result of a bias analysis
BiasDetector
Bias detector for identifying various forms of bias in LLM outputs
BiasMetrics
Metrics for bias detection
ConsistencyChecker
Checker for the internal consistency of LLM responses
ContextTracker
Context tracker for maintaining conversation continuity
ConversationAnalysisResult
ConversationAnalyzer
Analyzer for multi-turn dialogs
ConversationTurn
Single turn in a conversation
CriticalIssue
A critical issue flagged during analysis
DialogMetrics
Metrics for dialog analysis
EfficiencyMetrics
Metrics for computational efficiency
FactualityAnalysisResult
Result of a factuality analysis
FactualityChecker
Factuality checker for verifying the accuracy of LLM outputs
FactualityMetrics
Metrics for tracking factual accuracy
GenerationMetrics
Metrics for text generation performance
HallucinationAnalysisResult
Result of a hallucination analysis
HallucinationDetector
Hallucination detector for identifying false or fabricated information
HallucinationMetrics
Metrics for hallucination detection
HealthSummary
Summary of overall model health
LLMAnalysisReport
Analysis report for a single LLM output
LLMDebugConfig
Configuration for LLM debugging
LLMDebugger
Main LLM debugging framework
LLMHealthReport
Report on overall LLM health
LLMPerformanceProfiler
Performance profiler specific to LLM characteristics
PerformanceAnalysisResult
Result of a performance analysis
QualityMetrics
Metrics for output quality
SafetyAnalysisResult
Result of a safety analysis
SafetyAnalyzer
Safety analyzer for detecting harmful, toxic, or inappropriate content
SafetyMetrics
Metrics for tracking harmful content
ScalabilityMetrics
Metrics for scalability analysis

Enums§

AlignmentObjective
Types of alignment objectives for LLMs
AlignmentTrend
Trend in alignment scores over time
BiasCategory
Types of bias to detect in LLM outputs
HarmCategory
Categories of potential harm in LLM outputs
HealthStatus
Overall health status of an LLM
IssueCategory
Categories of issues identified during analysis
IssueSeverity
Severity levels for identified issues
RiskLevel
Risk levels associated with identified issues
SafetyTrend
Trend in safety scores over time

Functions§

llm_debugger
Create a new LLM debugger with default configuration
llm_debugger_with_config
Create a new LLM debugger with custom configuration
performance_focused_config
Create a performance-focused LLM debugger configuration
safety_focused_config
Create a safety-focused LLM debugger configuration
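
A sketch of how the two configuration presets pair with `llm_debugger_with_config`. All four names come from the function list above; that the config is passed by value, and which options each preset enables, are assumptions for illustration.

```rust
use llm_debugging::{
    llm_debugger_with_config, performance_focused_config, safety_focused_config,
};

fn main() {
    // Preset tuned for safety analysis (harm categories, toxicity).
    let _safety_debugger = llm_debugger_with_config(safety_focused_config());

    // Preset tuned for performance profiling (generation, efficiency,
    // and scalability metrics).
    let _perf_debugger = llm_debugger_with_config(performance_focused_config());
}
```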