Large Language Model (LLM) Specific Debugging
This module provides specialized debugging capabilities for large language models, focusing on safety, alignment, factuality, toxicity detection, and performance characteristics specific to modern LLMs.
Structs

- AlignmentAnalysisResult
- AlignmentMetrics - Metrics for alignment monitoring
- AlignmentMonitor - Alignment monitor for ensuring LLM outputs align with intended behavior
- BatchLLMAnalysisReport
- BatchMetrics
- BiasAnalysisResult
- BiasDetector - Bias detector for identifying various forms of bias in LLM outputs
- BiasMetrics - Metrics for bias detection
- ConsistencyChecker - Consistency checker for internal consistency in responses
- ContextTracker - Context tracking for conversation continuity
- ConversationAnalysisResult
- ConversationAnalyzer - Conversation analyzer for multi-turn dialog analysis
- ConversationTurn - Single turn in a conversation
- CriticalIssue
- DialogMetrics - Metrics for dialog analysis
- EfficiencyMetrics - Metrics for computational efficiency
- FactualityAnalysisResult
- FactualityChecker - Factuality checker for verifying the accuracy of LLM outputs
- FactualityMetrics - Metrics for tracking factual accuracy
- GenerationMetrics - Metrics for text generation performance
- HallucinationAnalysisResult
- HallucinationDetector - Hallucination detector for identifying false or fabricated information
- HallucinationMetrics - Metrics for hallucination detection
- HealthSummary
- LLMAnalysisReport
- LLMDebugConfig - Configuration for LLM debugging
- LLMDebugger - Main LLM debugging framework
- LLMHealthReport
- LLMPerformanceProfiler - Performance profiler specific to LLM characteristics
- PerformanceAnalysisResult
- QualityMetrics - Metrics for output quality
- SafetyAnalysisResult
- SafetyAnalyzer - Safety analyzer for detecting harmful, toxic, or inappropriate content
- SafetyMetrics - Safety metrics for tracking harmful content
- ScalabilityMetrics - Metrics for scalability analysis
Enums

- AlignmentObjective - Types of alignment objectives for LLMs
- AlignmentTrend - Trend in alignment scores over time
- BiasCategory - Types of bias to detect in LLM outputs
- HarmCategory - Categories of potential harm in LLM outputs
- HealthStatus
- IssueCategory
- IssueSeverity
- RiskLevel
- SafetyTrend - Trend in safety scores over time
Functions

- llm_debugger - Create a new LLM debugger with default configuration
- llm_debugger_with_config - Create a new LLM debugger with custom configuration
- performance_focused_config - Create a performance-focused LLM debugger configuration
- safety_focused_config - Create a safety-focused LLM debugger configuration
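The constructor functions above suggest a builder-style call pattern: obtain a config (default, safety-focused, or performance-focused), then pass it to a debugger constructor. The sketch below illustrates that pattern with local stand-in types; the struct fields and exact signatures are assumptions for illustration, not this crate's real API.

```rust
// Hypothetical stand-ins mirroring the listed items (LLMDebugConfig,
// LLMDebugger, llm_debugger, llm_debugger_with_config,
// safety_focused_config, performance_focused_config).
// The real crate's fields and signatures may differ.

#[derive(Debug, Clone)]
pub struct LLMDebugConfig {
    // Assumed fields for illustration only.
    pub enable_safety_analysis: bool,
    pub enable_performance_profiling: bool,
}

pub struct LLMDebugger {
    pub config: LLMDebugConfig,
}

/// Stand-in for `llm_debugger`: debugger with default configuration.
pub fn llm_debugger() -> LLMDebugger {
    llm_debugger_with_config(LLMDebugConfig {
        enable_safety_analysis: true,
        enable_performance_profiling: true,
    })
}

/// Stand-in for `llm_debugger_with_config`: debugger with a custom configuration.
pub fn llm_debugger_with_config(config: LLMDebugConfig) -> LLMDebugger {
    LLMDebugger { config }
}

/// Stand-in for `safety_focused_config`: prioritizes safety analysis.
pub fn safety_focused_config() -> LLMDebugConfig {
    LLMDebugConfig {
        enable_safety_analysis: true,
        enable_performance_profiling: false,
    }
}

/// Stand-in for `performance_focused_config`: prioritizes profiling.
pub fn performance_focused_config() -> LLMDebugConfig {
    LLMDebugConfig {
        enable_safety_analysis: false,
        enable_performance_profiling: true,
    }
}

fn main() {
    // Default debugger vs. debuggers built from focused configs.
    let default_dbg = llm_debugger();
    let safety_dbg = llm_debugger_with_config(safety_focused_config());
    let perf_dbg = llm_debugger_with_config(performance_focused_config());

    assert!(default_dbg.config.enable_safety_analysis);
    assert!(!safety_dbg.config.enable_performance_profiling);
    assert!(perf_dbg.config.enable_performance_profiling);
}
```

The focused configs are plain presets, so the same `llm_debugger_with_config` entry point serves every profile; only the config value changes.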