Crate scouter_client

Re-exports

pub use drifter::scouter::PyDrifter;
pub use profiler::scouter::DataProfiler;
pub use crate::http::PyScouterClient;
pub use crate::http::ScouterClient;
pub use error::ClientError;

Modules

data_utils
drifter
error
http
profiler

Structs

ActiveSpan
ActiveSpan is where all the magic happens. The active span attempts to maintain compatibility with the OpenTelemetry Span API.
Alert
Alerts
Attribute
BaseTracer
The main Tracer class
BatchConfig
Bin
BinnedMetric
BinnedMetricStats
BinnedMetrics
BinnedPsiFeatureMetrics
BinnedPsiMetric
CharStats
ConsoleDispatchConfig
CustomDriftProfile
CustomInterval
CustomMetric
CustomMetricAlertCondition
CustomMetricAlertConfig
CustomMetricDriftConfig
CustomMetricFeatureQueue
CustomMetricServerRecord
DataProfile
Distinct
Doane
DriftAlertRequest
DriftRequest
EqualWidthBinning
EvaluationConfig
ExportConfig
FeatureMap
FeatureProfile
Features
FreedmanDiaconis
GetProfileRequest
GrpcConfig
GrpcSpanExporter
Histogram
Python class for a feature histogram
HttpConfig
HttpSpanExporter
KafkaConfig
LLMAlertConfig
LLMDriftConfig
LLMDriftMap
LLMDriftMetric
LLMDriftProfile
LLMDriftRecordPaginationRequest
LLMDriftServerRecord
LLMEvalMetric
LLMEvalRecord
LLMEvalResults
Enhanced results collection that captures both successes and failures
LLMEvalTaskResult
Struct for collecting results from LLM evaluation tasks.
LLMMetricAlertCondition
LLMMetricRecord
LLMRecord
LLMRecordQueue
LatencyMetrics
Manual
Metric
Metrics
MockConfig
NumProfiler
NumericStats
ObservabilityMetrics
Observer
OpsGenieDispatchConfig
OtelHttpConfig
PaginationCursor
PaginationResponse
ProfileRequest
ProfileStatusRequest
PsiAlertConfig
PsiChiSquareThreshold
PsiDriftConfig
PsiDriftMap
PsiDriftProfile
PsiFeatureDriftProfile
PsiFeatureQueue
PsiFixedThreshold
PsiMonitor
PsiNormalThreshold
PsiServerRecord
QuantileBinning
Quantiles
Python class for quantiles
QueueBus
QueueBus is an mpsc bus for publishing events to subscribers. It leverages an unbounded channel and is the primary way to publish non-blocking events to background queues via ScouterQueue.
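To make the publish/subscribe pattern concrete, here is a minimal sketch of an unbounded, non-blocking event bus using only the Python standard library. This is a conceptual analogue, not Scouter's actual API: the names `bus`, `worker`, and the event dicts are all illustrative.

```python
import queue
import threading

# Conceptual sketch (not Scouter's API): an unbounded, non-blocking
# event bus similar in spirit to QueueBus. Producers publish without
# blocking; a background worker drains events for processing.
bus = queue.SimpleQueue()  # unbounded, so put() never blocks
received = []
done = threading.Event()

def worker():
    # Drain events until the sentinel None is seen.
    while True:
        event = bus.get()
        if event is None:
            break
        received.append(event)
    done.set()

threading.Thread(target=worker, daemon=True).start()

# Publish events from the foreground without blocking.
for i in range(3):
    bus.put({"feature": "f1", "value": float(i)})
bus.put(None)  # signal shutdown to the worker
done.wait(timeout=5)
```

An unbounded channel trades memory bounds for guaranteed non-blocking publishes, which matches the stated goal of keeping event submission off the caller's hot path.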
RabbitMQConfig
RedisConfig
RegisteredProfileResponse
Rice
RouteMetrics
Scott
ScouterQueue
ScouterResponse
ScouterServerError
Common struct for returning errors from the scouter server (axum response)
ServerRecords
SlackDispatchConfig
SpanEvent
SpanLink
SpcAlert
SpcAlertConfig
SpcAlertRule
SpcDriftConfig
Python class for a monitoring configuration
SpcDriftFeature
SpcDriftFeatures
SpcDriftMap
Python class for a Drift map of features with calculated drift
SpcDriftProfile
SpcFeatureAlert
SpcFeatureAlerts
SpcFeatureDrift
Python class for a feature drift
SpcFeatureDriftProfile
Python class for a monitoring profile
SpcFeatureQueue
SpcMonitor
SpcServerRecord
SquareRoot
StdoutSpanExporter
StringProfiler
StringStats
Sturges
TagRecord
TagsResponse
TaskState
TerrellScott
TestSpanExporter
TraceBaggageRecord
TraceBaggageResponse
TraceFilters
TraceListItem
TraceMetricBucket
TraceMetricsRequest
TraceMetricsResponse
TracePaginationResponse
TraceRecord
TraceSpan
TraceSpanRecord
TraceSpansResponse
UpdateAlertResponse
UpdateAlertStatus
VersionRequest
WordStats

Enums

AlertDispatchType
AlertThreshold
AlertZone
CommonCrons
CompressionType
ContractError
DataProfileError
DataType
DriftError
DriftProfile
DriftType
EntityType
EvaluationError
EventError
Feature
FunctionType
OtelProtocol
ProfileError
PyEventError
RecordError
RecordType
ServerRecord
SpanKind
SpcAlertType
TimeInterval
TypeError
UtilError

Traits

CategoricalFeatureHelpers

Functions

async_evaluate_llm
Main orchestration function that decides which execution path to take
compute_feature_correlations
create_feature_map
evaluate_llm
Function for evaluating LLM responses and generating metrics. The primary use case for evaluate_llm is to take a list of data samples, which often contain inputs and outputs from LLM systems, and evaluate them against user-defined metrics in an LLM-as-a-judge pipeline. The user is expected to provide a list of dict objects and a list of LLMEval metrics. These eval metrics are used to create a workflow, which is then executed in an async context. All eval scores are extracted and returned to the user.
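The flow described above can be sketched in plain Python. This is a hedged, self-contained illustration of the pipeline shape (records in, per-metric scores out), not Scouter's actual API: the metric functions `length_score` and `echo_penalty` and the helper `evaluate` are hypothetical stand-ins for LLMEval metrics and the generated workflow.

```python
# Conceptual sketch of the evaluate_llm flow (plain Python, not
# Scouter's actual API): each record is scored against a set of
# user-defined judge metrics, and scores are collected per record.

def length_score(record):
    # Hypothetical metric: reward non-empty responses (0.0 or 1.0).
    return 1.0 if record.get("response") else 0.0

def echo_penalty(record):
    # Hypothetical metric: penalize responses that merely repeat the input.
    return 0.0 if record.get("response") == record.get("input") else 1.0

def evaluate(records, metrics):
    # Mirrors the described pipeline: a list of dicts in, scores out.
    results = []
    for record in records:
        results.append({name: fn(record) for name, fn in metrics.items()})
    return results

records = [
    {"input": "hi", "response": "hello there"},
    {"input": "hi", "response": "hi"},
]
metrics = {"non_empty": length_score, "no_echo": echo_penalty}
scores = evaluate(records, metrics)
```

In the real pipeline the metrics are LLMEvalMetric objects judged by an LLM rather than local functions, and execution happens asynchronously, but the record-in/score-out contract is the same.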
flush_tracer
Helper function to force flush the tracer provider
generate_alerts
Generate alerts for each feature in the drift array
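As a rough illustration of per-feature alerting, the sketch below flags any feature whose drift score crosses a fixed threshold. The function name, message format, and threshold logic are assumptions for illustration only; the real generate_alerts works against drift profiles and configured alert rules.

```python
# Conceptual sketch (not Scouter's API): flag an alert for every
# feature whose drift value crosses a fixed threshold, similar in
# spirit to generating alerts over a drift array.

def generate_feature_alerts(drift_values, threshold):
    # drift_values: mapping of feature name -> computed drift score.
    alerts = {}
    for feature, value in drift_values.items():
        if abs(value) > threshold:
            alerts[feature] = f"drift {value:.2f} exceeded threshold {threshold:.2f}"
    return alerts

alerts = generate_feature_alerts({"age": 0.05, "income": 0.31}, threshold=0.25)
```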
get_function_type
Determines whether a Python function is async, an async generator, or a generator. This is a helper utility used in tracing decorators.
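The same classification can be done with the standard `inspect` module, which is how a tracing decorator typically distinguishes the cases. The return values shown here are illustrative strings, not get_function_type's actual return type.

```python
import inspect

def classify(fn):
    # Order matters: async generator functions must be checked before
    # coroutine functions, and both before plain generator functions.
    if inspect.isasyncgenfunction(fn):
        return "async_generator"
    if inspect.iscoroutinefunction(fn):
        return "async"
    if inspect.isgeneratorfunction(fn):
        return "generator"
    return "sync"

async def fetch(): ...
async def stream():
    yield 1
def count():
    yield 1
def plain(): ...
```

A decorator needs this distinction because each kind of callable must be wrapped differently (awaited, iterated asynchronously, iterated synchronously, or called directly) for spans to open and close at the right times.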
init_tracer
Global initialization function for the tracer. This sets up the tracer provider with the specified service name, endpoint, and sampling ratio. If no endpoint is provided, spans will be exported to stdout for debugging purposes.
shutdown_tracer
workflow_from_eval_metrics
Builds a workflow from a list of LLMEvalMetric objects