Crate api_openai

This is a library for interacting with the OpenAI API. It provides a client for various OpenAI services, including assistants, chat, embeddings, files, fine-tuning, images, models, moderations, realtime, responses, and vector stores.

§Governing Principle: “Thin Client, Rich API”

This library follows the principle of “Thin Client, Rich API”: it exposes all server-side functionality transparently while adding no client-side intelligence or automatic behavior.

Key Distinction: The principle prohibits automatic/implicit behaviors but explicitly allows and encourages explicit/configurable enterprise reliability features.

§Core Principles

  • API Transparency: One-to-one mapping with OpenAI API endpoints
  • Zero Automatic Behavior: No implicit decision-making or magic thresholds
  • Explicit Control: Developer decides when, how, and why operations occur
  • Information vs Action: Clear separation between data retrieval and state changes
  • Configurable Reliability: Enterprise features available through explicit configuration

§Enterprise Reliability Features

The following enterprise reliability features are allowed, provided they are enabled through explicit configuration and operate transparently:

  • Configurable Retry Logic: Exponential backoff with explicit configuration
  • Circuit Breaker Pattern: Failure threshold management with transparent state
  • Rate Limiting: Request throttling with explicit rate configuration
  • Failover Support: Multi-endpoint configuration and automatic switching
  • Health Checks: Periodic endpoint health verification and monitoring
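
The retry feature above can be sketched as a pure backoff calculation driven entirely by an explicit configuration struct. This is an illustrative sketch, not the crate's actual API: `RetryConfig` and `delay_for_attempt` are hypothetical names chosen to show the "no magic thresholds" idea.

```rust
// Hypothetical sketch of explicit retry configuration with exponential
// backoff. Names are illustrative, not the crate's actual API.

#[derive(Clone, Copy)]
struct RetryConfig {
    max_retries: u32,
    base_delay_ms: u64,
    max_delay_ms: u64,
}

impl RetryConfig {
    /// Delay before the given attempt (0-based): base * 2^attempt, capped
    /// at the configured maximum.
    fn delay_for_attempt(&self, attempt: u32) -> u64 {
        let factor = 1u64.checked_shl(attempt).unwrap_or(u64::MAX);
        self.base_delay_ms
            .saturating_mul(factor)
            .min(self.max_delay_ms)
    }
}

fn main() {
    // Explicit configuration: nothing retries unless the developer asks.
    let cfg = RetryConfig { max_retries: 5, base_delay_ms: 100, max_delay_ms: 2_000 };
    let delays: Vec<u64> = (0..cfg.max_retries)
        .map(|a| cfg.delay_for_attempt(a))
        .collect();
    println!("{:?}", delays); // [100, 200, 400, 800, 1600]
}
```

Because every threshold lives in the configuration value the caller constructs, the behavior is fully predictable from the call site alone.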

§State Management Policy

✅ ALLOWED: Runtime-Stateful, Process-Stateless

  • Connection pools, circuit breaker state, rate limiting buckets
  • Retry logic state, failover state, health check state
  • Runtime state that dies with the process
  • No persistent storage or cross-process state

❌ PROHIBITED: Process-Persistent State

  • File storage, databases, configuration accumulation
  • State that survives process restarts
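
To make the distinction concrete, a token-bucket rate limiter is a minimal example of runtime-stateful, process-stateless design: all state is an in-memory struct that dies with the process, with nothing written to disk. The names here are illustrative, not the crate's API.

```rust
// Illustrative token-bucket rate limiter holding only in-memory runtime
// state (dies with the process). Hypothetical names, not the crate's API.

struct TokenBucket {
    capacity: f64,
    tokens: f64,
    refill_per_sec: f64,
}

impl TokenBucket {
    fn new(capacity: f64, refill_per_sec: f64) -> Self {
        Self { capacity, tokens: capacity, refill_per_sec }
    }

    /// Refill the bucket for `elapsed_secs`, then try to consume one token.
    fn try_acquire(&mut self, elapsed_secs: f64) -> bool {
        self.tokens = (self.tokens + elapsed_secs * self.refill_per_sec).min(self.capacity);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    let mut bucket = TokenBucket::new(2.0, 1.0); // burst of 2, refills 1 token/sec
    assert!(bucket.try_acquire(0.0));
    assert!(bucket.try_acquire(0.0));
    assert!(!bucket.try_acquire(0.0)); // bucket empty, no time elapsed
    assert!(bucket.try_acquire(1.0));  // one token refilled after 1 second
    println!("token bucket behaves as configured");
}
```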

Implementation Requirements:

  • Feature gating behind cargo features (retry, circuit_breaker, rate_limiting, failover, health_checks)
  • Explicit configuration required (no automatic enabling)
  • Transparent method naming (e.g., execute_with_retries(), execute_with_circuit_breaker())
  • Zero overhead when features disabled
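
The feature-gating requirement might look like the following Cargo.toml fragment (a hypothetical sketch; the aggregate `full_reliability` feature is an assumption, not a documented feature of this crate):

```toml
# Hypothetical Cargo.toml fragment: each reliability feature is opt-in
# and compiles to nothing when disabled.
[features]
default = []
retry = []
circuit_breaker = []
rate_limiting = []
failover = []
health_checks = []
full_reliability = ["retry", "circuit_breaker", "rate_limiting", "failover", "health_checks"]
```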

This design ensures predictable behavior, explicit control, and transparency for developers using the library.
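
As a final illustration of "transparent state", a circuit breaker can expose its state machine directly so callers always know why a request was short-circuited. This is a minimal sketch assuming a simple Closed → Open → HalfOpen cycle; the types are hypothetical, not the crate's actual `enhanced_circuit_breaker` API.

```rust
// Minimal circuit-breaker state machine with fully transparent state.
// Hypothetical types; not the crate's actual API.

#[derive(Debug, PartialEq, Clone, Copy)]
enum BreakerState {
    Closed,
    Open,
    HalfOpen,
}

struct CircuitBreaker {
    state: BreakerState,
    consecutive_failures: u32,
    failure_threshold: u32,
}

impl CircuitBreaker {
    fn new(failure_threshold: u32) -> Self {
        Self { state: BreakerState::Closed, consecutive_failures: 0, failure_threshold }
    }

    /// Record a call outcome and update the state transparently.
    fn record(&mut self, success: bool) {
        match (self.state, success) {
            (BreakerState::Closed, false) => {
                self.consecutive_failures += 1;
                if self.consecutive_failures >= self.failure_threshold {
                    self.state = BreakerState::Open;
                }
            }
            (BreakerState::HalfOpen, true) => {
                self.state = BreakerState::Closed;
                self.consecutive_failures = 0;
            }
            (BreakerState::HalfOpen, false) => self.state = BreakerState::Open,
            (_, true) => self.consecutive_failures = 0,
            _ => {}
        }
    }

    /// Explicit probe: after a cool-down, the caller moves Open -> HalfOpen.
    /// Nothing transitions automatically.
    fn allow_probe(&mut self) {
        if self.state == BreakerState::Open {
            self.state = BreakerState::HalfOpen;
        }
    }
}

fn main() {
    let mut cb = CircuitBreaker::new(3);
    for _ in 0..3 {
        cb.record(false);
    }
    assert_eq!(cb.state, BreakerState::Open); // threshold reached
    cb.allow_probe();
    cb.record(true);
    assert_eq!(cb.state, BreakerState::Closed); // recovered
    println!("final state: {:?}", cb.state);
}
```

Note that even the Open → HalfOpen transition is an explicit caller decision (`allow_probe`), consistent with the "Zero Automatic Behavior" principle above.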

Re-exports§

pub use client_api_accessors::ClientApiAccessors;
pub use super::TimePeriod;
pub use super::TimeSeriesPoint;
pub use super::CostTrendPoint;
pub use super::UsageSummary;
pub use super::CostBreakdown;
pub use super::EnterpriseClient;
pub use super::cost_management::*;
pub use super::region_management::*;
pub use super::quota_management::*;

Modules§

admin
Administrative APIs Module
administration_shared
Structures shared across the Administration API endpoints (Users, Projects, Invites, API Keys, Rate Limits).
advanced_auth
Advanced Authentication Module
assistants
This module defines the Assistants API client, which provides methods for interacting with the OpenAI Assistants API.
assistants_shared
Structures shared across the Assistants API, including Assistants, Threads, Messages, Runs, and Steps.
audio
This module defines the Audio API client, which provides methods for interacting with the OpenAI Audio API.
audit_logs_shared
Structures related to Audit Logs API endpoints.
batch_shared
Structures shared across the Batch API.
buffered_streaming
Buffered Streaming for Smoother UX
builder_enhancements
Builder Pattern Enhancements
chat
This module defines the Chat API client, which provides methods for interacting with the OpenAI Chat API.
chat_shared
This module defines shared data structures and components used across various OpenAI chat-related API endpoints. It includes definitions for chat completion requests, messages, content parts, and tool-related structures.
client
This module defines the Client structure for interacting with the OpenAI API. It provides methods for making various API requests, handling authentication, and managing HTTP communication.
client_api_accessors
API Accessor Methods for OpenAI Client
common
Defines common data structures (components) used across various OpenAI API responses and requests. Based on the components/schemas section of the OpenAPI specification.
completions_legacy
Structures related to the legacy Completions API.
components
This module defines shared data structures and components used across various OpenAI API groups. It includes common types for requests, responses, and specific components like chat, audio, and image-related structures.
connection_manager
Advanced HTTP Connection Management System
curl_generation
cURL generation functionality for OpenAI API requests.
diagnostics
General diagnostics functionality for monitoring API requests, performance, and errors.
dynamic_configuration
Dynamic Configuration Module
embeddings
This module defines the Embeddings API client, which provides methods for interacting with the OpenAI Embeddings API.
embeddings_request
Request structures for embeddings API
enhanced_batch_operations
Enhanced Batch Operations Module
enhanced_circuit_breaker
Enhanced Circuit Breaker Module
enhanced_client
Enhanced OpenAI Client with Advanced Connection Management
enhanced_client_builder
Builder for Enhanced OpenAI Client Configuration
enhanced_client_performance
Performance Monitoring and Analysis for Enhanced OpenAI Client
enhanced_embeddings
Enhanced Embeddings Client with Intelligent Batching
enhanced_rate_limiting
Enhanced Rate Limiting Module
enhanced_retry
Enhanced Retry Logic Module
enterprise
Enterprise Module
environment
This module defines the environment configuration for the OpenAI API client. It includes traits and concrete implementations for managing API keys, organization IDs, project IDs, and base URLs.
error
This module defines the error types for the OpenAI API client. It includes a comprehensive OpenAIError enum that covers various error scenarios, such as API errors, network issues, and serialization failures.
exposed
Exposed namespace of the module.
failover
Failover Module
files
This module defines the Files API client, which provides methods for interacting with the OpenAI Files API.
fine_tuning
This module defines the FineTuning API client, which provides methods for interacting with the OpenAI Fine-tuning API.
fine_tuning_shared
Structures shared across the Fine-tuning API, including jobs, checkpoints, and events.
health_checks
Health Check Module
images
This module defines the Images API client, which provides methods for interacting with the OpenAI Images API.
input
Structures related to input content parts and messages.
input_validation
Input validation module for OpenAI API requests
metrics_framework
Metrics Collection Framework
model_comparison
Model Comparison for A/B Testing
model_deployment
Model Deployment Module
model_tuning
Model Tuning Module
models
This module defines the Models API client, which provides methods for interacting with the OpenAI Models API.
moderations
This module defines the Moderations API client, which provides methods for interacting with the OpenAI Moderation API.
orphan
Orphan namespace of the module.
output
Structures related to output items generated by the model, such as messages, tool calls, and annotations.
own
Own namespace of the module.
performance_cache
High-Performance Caching System
performance_monitoring
Performance Monitoring Module
platform_specific
Platform-specific features and integrations.
prelude
Prelude for importing essentials: use api_openai::prelude::*;.
query
Defines common query parameters used for listing resources (pagination, sorting).
realtime
This module defines the Realtime API client, which provides methods for interacting with the OpenAI Realtime API.
realtime_shared
Structures shared across the Realtime API for session management and event handling.
request_batching
Intelligent Request Batching System
request_cache
Request caching functionality for OpenAI API client.
request_cache_enhanced
Enhanced Request Caching
request_templates
Request Templates for Common Use Cases
request_validation
Validate trait implementations for OpenAI request types
response_cache
Response Cache Module
responses
This module defines the Responses API client, which provides methods for interacting with the OpenAI Responses API.
secret
This module defines the Secret type for handling sensitive information like API keys. It wraps a string and ensures that the secret is not accidentally exposed in debug output or logs.
streaming_control
Streaming Control Module
streaming_performance_enhanced
Enhanced Streaming Performance Module
sync
Synchronous API wrapper for the OpenAI client.
tools
This module defines various tool-related structures used across the OpenAI API.
uploads
Uploads Module
usage_shared
Structures related to API Usage and Costs endpoints.
validators
Validators module containing reusable validation functions
vector_stores
This module defines the Vector Stores API client, which provides methods for interacting with the OpenAI Vector Stores API.
vector_stores_shared
Structures related to Vector Stores, including files, batches, and search results.
websocket_reliability_enhanced
Enhanced WebSocket Reliability Module
websocket_streaming
WebSocket Streaming Module

Structs§

AccessPattern
Access pattern analysis for adaptive behavior
Admin
Administrative API client
AdvancedAuthConfig
Advanced authentication configuration
AdvancedAuthManager
Advanced authentication manager
ApiConnectorConfig
Configuration for third-party API connectors.
ApiError
Represents an error returned by the OpenAI API. Corresponds to the Error schema in the OpenAPI spec.
ApiErrorWrap
A wrapper for ApiError that includes the HTTP status code.
Assistants
The client for the OpenAI Assistants API.
Audio
The client for the OpenAI Audio API.
AuthAuditEntry
Authentication audit log entry
AuthPerformanceMetrics
Authentication performance metrics
AuthSession
Authentication session information
AutoScalingConfig
Auto-scaling configuration
BatchConfig
Configuration for request batching behavior
BatchJobConfig
Configuration for batch job creation
BatchMetrics
Batching performance metrics
BatchOptimizer
Batch processing optimization strategies
BatchProcessingMetrics
Metrics for batch processing performance
BatchRecommended
Recommended configuration values for enhanced batch operations following “Thin Client, Rich API” principles.
BatchResult
Batch processing result
BatchRetryConfig
Configuration for batch request retry behavior
BatchedRequest
Batched request container
BatchingAnalysis
Analysis of embedding batching potential
BrowsingMetadata
Metadata about browsing operation.
BrowsingResult
Result of web browsing operation.
BufferConfig
Configuration for buffered streaming
BufferedMessage
Buffered message for reliable delivery
BufferedStream
Buffered stream wrapper
CacheConfig
Cache configuration with performance optimizations
CacheConfig
Configuration for request caching behavior.
CacheConfig
Configuration for response caching behavior
CacheEntry
A cache entry containing the cached value with metadata.
CacheEntry
Cached response entry with metadata
CacheKey
Cache key with smart hashing for optimal performance
CacheKey
Cache key generation and management
CacheMetrics
Cache performance metrics
CacheMetrics
Cache performance metrics
CacheStatistics
Statistics for cache performance monitoring.
CacheStatistics
Cache statistics for monitoring and analysis
CachedClient
Cache-aware HTTP client wrapper
CachedResponse
Cached response with metadata
CancellationToken
Cancellation token for controlling streaming operations
Chat
The client for the OpenAI Chat API.
CheckpointConfig
Checkpointing configuration
CircuitBreakerMetrics
Circuit breaker metrics
CircuitBreakerStateManager
Circuit breaker state management
Client
The main client for interacting with the OpenAI API.
CodeExecutionConfig
Configuration for code execution environment.
CodeExecutionResult
Result of code execution.
ComparisonResults
Results from comparing multiple models
CompressedEvent
Compressed event for efficient storage and transmission
ConfigChangeEvent
Configuration change event
ConfigChangeReceiver
Receiver for configuration change events
ConfigChangeSender
Sender for configuration change events
ConfigManager
Configuration manager for stateless configuration operations
ConfigSnapshot
Configuration snapshot representing a point-in-time configuration state
ConfigValidator
Configuration validator
ConnectionConfig
Configuration for advanced connection management
ConnectionEfficiencyMetrics
Overall connection efficiency metrics
ConnectionGuard
RAII guard for connection pool connections
ConnectionManager
Global connection manager
ConnectionMetrics
Connection performance metrics
ConnectionPerformanceReport
Connection performance metrics for monitoring
ConnectionPoolStats
Connection pool statistics
ConnectionStats
Connection usage statistics
CreateProjectRequest
Request to create a new project
CurlConnectionOptions
Connection and security options for cURL commands
CurlFormatOptions
Options for formatting cURL commands
CurlFormattingOptions
Formatting-related options for cURL commands
CurlGenerator
Main cURL generation utility
CurlRequest
Represents a cURL request with all necessary information
CurlRequestBuilder
Builder for constructing cURL requests
DeleteFileResponse
Response when deleting a file
DeleteResponse
Generic delete response
DeploymentConfig
Model deployment configuration
DeploymentEvent
Deployment event for history tracking
DeploymentEventReceiver
Receiver for deployment events
DeploymentEventSender
Sender for deployment events
DeploymentManager
Deployment manager for orchestrating model deployments
DeploymentManagerConfig
Configuration for deployment manager
DeploymentStats
Deployment statistics
DiagnosticsCollectionConfig
Configuration for what data to collect
DiagnosticsCollector
Main diagnostics collector
DiagnosticsConfig
Configuration for diagnostics collection behavior
DiagnosticsPerformanceConfig
Configuration for performance metrics
DiagnosticsReport
Comprehensive diagnostics report
DynamicConfigManager
Utilities for dynamic configuration management
EmbeddingBatchProcessor
Smart embedding processing strategies
Embeddings
The client for the OpenAI Embeddings API.
EnhancedBatchRequest
Enhanced batch request with priority and retry configuration
EnhancedCacheConfig
Advanced cache configuration with production-ready features
EnhancedCacheEntry
Enhanced cache entry with rich metadata
EnhancedCacheStatistics
Enhanced cache statistics with detailed metrics
EnhancedCircuitBreaker
Enhanced circuit breaker executor
EnhancedCircuitBreakerConfig
Enhanced circuit breaker configuration
EnhancedClient
Enhanced OpenAI client with comprehensive reliability features
EnhancedClientBuilder
Builder for enhanced client configuration
EnhancedEmbeddings
Enhanced embeddings client with intelligent batching
EnhancedEmbeddingsConfig
Configuration for enhanced embeddings
EnhancedRateLimiter
Enhanced rate limiter executor
EnhancedRateLimitingConfig
Enhanced rate limiting configuration
EnhancedRequestCache
Enhanced request cache with advanced features
EnhancedRetryConfig
Enhanced retry configuration for HTTP requests
EnhancedRetryExecutor
Enhanced retry executor with comprehensive retry logic
ErrorMetrics
Metrics for tracking errors
ErrorMetrics
Error tracking metrics
EventBatch
Buffered event batch for efficient processing
FailoverConfig
Failover configuration and policy
FailoverContext
Failover context representing the current state of a failover attempt
FailoverEndpoint
Endpoint configuration for failover
FailoverEventReceiver
Receiver for failover events
FailoverEventSender
Sender for failover events
FailoverExecutor
Failover execution utilities
FailoverManager
Failover manager for endpoint selection and health tracking
FileObject
File object returned by the OpenAI Files API
Files
The client for the OpenAI Files API.
FineTuning
The client for the OpenAI Fine-tuning API.
GroundedResponse
Response from search grounding operation.
HealthCheckConfig
Configuration for health checks
HealthCheckConfig
Health check configuration for deployments
HealthCheckResult
Health check result for a single endpoint
HealthChecker
Stateless health check utilities
HostConnectionPool
Connection pool manager for a specific host
HyperParameters
Training hyperparameters
ImageGenerationConfig
Configuration for image generation operations.
ImageMetadata
Metadata about generated image.
ImageResult
Result of image generation/manipulation.
Images
The client for the OpenAI Images API.
Invite
Invite entity for user invitations
ListFilesResponse
List files response
ListResponse
List response wrapper
ManagedConnection
Enhanced HTTP client with advanced connection management
MemoryUsageReport
Memory usage report structure
MetricsAggregation
Metrics aggregation statistics
MetricsAnalysisReport
Comprehensive metrics analysis report
MetricsCollector
Central metrics collector and analyzer
MetricsConfig
Configuration for metrics collection behavior
MetricsDataPoint
Time-series data point for metrics
MetricsSnapshot
Comprehensive metrics snapshot
ModelCheckpoint
Model checkpoint information
ModelComparator
Model comparator for A/B testing
ModelComparisonResult
Result from comparing a single model
ModelDeployment
Model deployment instance
ModelDeploymentUtils
Model deployment utilities
ModelTuningUtils
Model tuning utilities
Models
The client for the OpenAI Models API.
Moderations
The client for the OpenAI Moderation API.
MultiTenantConfig
Multi-tenant authentication configuration
OAuthTokenResponse
OAuth token response from authentication server
OpenAIRecommended
Recommended configuration values for OpenAI API client following “Thin Client, Rich API” principles.
OpenaiEnvironmentImpl
Concrete implementation of OpenaiEnvironment.
Organization
Organization entity
OrganizationSettings
Organization settings
OrganizationUpdate
Request to update organization
ParameterDefinition
Definition of a single parameter.
PerformanceAnalysis
Analysis of connection performance
PerformanceCache
High-performance cache implementation
PerformanceConfig
Performance monitoring configuration
PerformanceMetrics
Aggregated performance metrics
PerformanceMonitor
Performance monitoring context
PoolStatistics
Statistics for a connection pool
ProcessingStats
Processing statistics
Project
Project entity
ProjectUpdate
Request to update project
RateLimitConfig
Rate limiting configuration.
Realtime
The client for the OpenAI Realtime API.
RegressionReport
Performance regression detection report
ReliableWebSocketSession
Enhanced WebSocket session with reliability features
RequestBatcher
Intelligent request batcher
RequestCache
Thread-safe request cache with TTL and LRU eviction.
RequestCacheKey
Key used for caching requests based on endpoint, method, and content.
RequestCacheKey
Re-export RequestCacheKey from original implementation
RequestMetrics
Metrics for a single request
RequestResponseMetrics
Combined request/response metrics
RequestSignature
Request signature for batching similarity detection
RequestTemplate
Request template for common use cases
ResourceRequirements
Resource requirements for deployment
ResponseCache
Advanced response cache with TTL and intelligent management
ResponseMetrics
Metrics for a single response
Responses
The client for the OpenAI Responses API.
RetryConfig
Retry configuration.
RetryState
Thread-safe retry state management
SearchGroundingConfig
Configuration for search grounding operations.
SearchMetadata
Metadata about search operation.
SearchSource
Individual search result source.
Secret
Represents a secret string, such as an API key. It wraps secrecy::SecretString to prevent accidental exposure.
SlidingWindowState
Sliding window rate limiter state
StreamConfig
Configuration for synchronous streaming operations.
StreamControl
Stream control handle for managing streaming operations
StreamControlConfig
Configuration for streaming control behavior
StreamControlManager
Stream control utilities
StreamControlReceiver
Receiver for stream control commands
StreamControlSender
Sender for stream control commands
StreamingBuffer
Enhanced streaming buffer with performance optimizations
StreamingConnectionPool
Connection pool for streaming endpoints
StreamingPerformanceConfig
Configuration for streaming performance optimizations
StreamingPerformanceStats
Statistics for streaming performance monitoring
StreamingProcessor
Enhanced streaming processor with performance optimizations
SyncChat
Synchronous wrapper for the chat API.
SyncClient
Synchronous wrapper around the async OpenAI client.
SyncEmbeddings
Synchronous wrapper for the embeddings API.
SyncModels
Synchronous wrapper for the models API.
SyncStreamIterator
Synchronous iterator that bridges async receivers to sync iteration.
ThroughputMetrics
Throughput metrics under load
TimingMetrics
Request timing metrics
TokenBucketState
Token bucket rate limiter state
ToolParameters
Tool parameter definitions.
ToolResult
Result of tool execution.
TrainingDataConfig
Training data configuration
TrainingMetrics
Training progress metrics
TuningEvent
Tuning event for logging
TuningEventReceiver
Tuning event receiver
TuningEventSender
Tuning event sender
TuningJob
Model tuning job instance
TuningJobConfig
Model tuning job configuration
TuningManager
Model tuning manager
TuningNotification
Tuning event notification
TuningResourceRequirements
Resource requirements for training
TuningStats
Tuning statistics
UnifiedPerformanceDashboard
Unified performance dashboard combining all components
UploadConfig
File upload configuration
Uploads
Uploads API implementation
UsageLimits
Usage limits for organization
User
User entity within organization
ValidationError
Validation error containing detailed information about what failed
VectorStores
The client for the OpenAI Vector Stores API.
WebBrowsingConfig
Configuration for web browsing operations.
WebSocketConfig
WebSocket connection configuration
WebSocketConnection
WebSocket connection handle
WebSocketConnectionStats
WebSocket connection statistics for reliability monitoring
WebSocketConnectionStats
WebSocket connection statistics
WebSocketEventReceiver
Receiver for WebSocket events
WebSocketEventSender
Sender for WebSocket events
WebSocketMessageReceiver
Receiver for WebSocket messages
WebSocketMessageSender
Sender for WebSocket messages
WebSocketPool
WebSocket connection pool for managing multiple connections
WebSocketPoolConfig
Configuration for WebSocket connection pool
WebSocketReliabilityConfig
Configuration for WebSocket reliability features
WebSocketStreamer
WebSocket streaming utilities
WsSession
A WebSocket session client for the OpenAI Realtime API.

Enums§

ApiAuthentication
Authentication methods for third-party APIs.
BatchRequestPriority
Priority levels for batch request processing
CachePriority
Cache entry priority levels
CircuitBreakerState
Circuit breaker state enumeration
CodeRuntime
Available code runtime environments.
ComparisonMode
Comparison mode for model testing
ConfigValue
Configuration value that can be dynamically updated
ConnectionHealth
Connection health status
ConnectionState
Connection state for reliability tracking
DeploymentEventType
Types of deployment events
DeploymentNotification
Deployment notification types
DeploymentStatus
Model deployment status
DeploymentStrategy
Deployment strategy type
EndpointHealth
Endpoint health status
EvictionPolicy
Cache eviction policies
FailoverError
Failover error types
FailoverEvent
Failover event types
FailoverStrategy
Failover strategy for selecting endpoints
FileStatus
Status of a file in the OpenAI system
FineTuningMethod
Parameter-efficient fine-tuning method
HandlerMessage
Represents a message handled by the WebSocket session.
HealthCheckStrategy
Health check strategy
HealthStatus
Health status for an endpoint
ImageModel
Available image generation models.
ImageQuality
Image quality options.
ImageResponseFormat
Image response format.
ImageSize
Image size options.
ImageStyle
Image style options.
InviteStatus
Invite status enumeration
OpenAIError
Represents all possible errors that can occur when interacting with the OpenAI API.
ProjectStatus
Project status enumeration
RateLimitingAlgorithm
Rate limiting algorithm enumeration
ResponseRequestValidationError
Validation errors for CreateResponseRequest builders
SearchEngine
Available search engines for grounding.
SecurityLevel
Security levels for code execution.
StreamControlCommand
Commands for controlling stream operations
StreamState
Stream control state for tracking operations
TrainingObjective
Training objective type
TuningEventType
Types of tuning events
TuningStatus
Fine-tuning job status
UserRole
User roles within organization
ValidationRule
Configuration validation rule
WebSocketEvent
WebSocket event types
WebSocketMessage
WebSocket message type
WebSocketState
WebSocket connection state

Constants§

OPENAI_BETA_HEADER
OpenAI Beta header for API requests.

Traits§

ApiConnector
Trait for third-party API connectors.
ClientApiAccessors
Extension trait providing API accessor methods for Client
CreateResponseRequestEnhancements
Enhanced builder methods for CreateResponseRequest
CurlGeneration
Trait for API clients to support cURL generation
CustomTool
Trait for custom tools that can be integrated.
EnvironmentInterface
A trait defining the interface for environment-related information.
FunctionToolEnhancements
Enhanced builder methods for FunctionTool
InputMessageEnhancements
Enhanced builder methods for InputMessage
OpenaiEnvironment
A trait defining the interface for OpenAI environment configuration.
PlatformSpecificClient
Extension trait for platform-specific client methods.
StreamBufferExt
Extension trait for streams to add buffering
Validate
Trait for validating request parameters before API calls

Functions§

aggregate_batch_results
Aggregate and analyze batch results
analyze_embedding_batching_potential
Analyze batching potential for given requests (standalone function)
analyze_performance
Analyze connection performance and provide recommendations
build_curl_request
Helper function to build a cURL request from HTTP request components
cancel_batch_job
Cancel a batch job with enhanced cleanup
configure_performance_monitoring
Set custom performance monitor configuration
configure_streaming_processor
Configure the global streaming processor
create_batch_job
Enhanced batch job creation with priority and retry configuration
create_failover_client
Convenience function: create a failover client using the global manager
create_oauth_client
Convenience function: create an OAuth client using the global manager
create_reliable_session
Create a reliable WebSocket session with global configuration
create_reliable_session_with_config
Create a reliable WebSocket session with custom configuration
create_tenant_client
Convenience function: create a tenant client using the global manager
detect_performance_regression
Detect performance regression using global monitor
flush_events
Flush buffered events from the global processor
get_advanced_auth_manager
Get reference to global advanced authentication manager
get_batch_status
Get enhanced batch status with progress tracking
get_buffer_stats
Get buffer statistics from the global processor
get_connection_stats
Get connection statistics from the global processor
get_global_config
Get the global WebSocket reliability configuration
get_performance_monitor
Get the global performance monitor instance
get_streaming_processor
Get the global streaming processor instance
initialize_advanced_auth
Initialize global advanced authentication manager
list_batch_jobs
List batch jobs with enhanced filtering and pagination
map_deserialization_error
Helper function to map serde_json::Error to OpenAIError::Internal.
measure_concurrent_performance
Measure concurrent performance using global monitor
measure_overhead_consistency
Measure overhead consistency using global monitor
measure_request_overhead
Measure per-request overhead using the global performance monitor
measure_throughput_under_load
Measure throughput under load using global monitor
monitor_batch_with_webhooks
Monitor batch progress with webhook notifications
monitor_memory_usage
Monitor memory usage using global monitor
optimize_and_chunk_batch
Optimize and chunk batch requests for better performance
optimize_batch_performance
Optimize batch performance with advanced algorithms
process_batch
Process multiple events using the global processor
process_batch_with_caching
Process batch with intelligent caching and deduplication
process_concurrent_batches
Process multiple batches concurrently with rate limiting
process_enhanced_batch
Process enhanced batch with priority handling and advanced features
process_event
Process a single event using the global streaming processor
retry_failed_batch
Retry failed batch with enhanced error recovery
role_level
Get role hierarchy level for comparison
serialize_request_to_json
Helper function to serialize a request to JSON
set_global_config
Set the global WebSocket reliability configuration
validate_permission
Check if a user has sufficient permissions for an operation

Type Aliases§

ApiRequestCache
Cache implementation specifically for API requests and responses.
EnhancedApiRequestCache
Type alias for enhanced API request cache
Result
Type alias for Results using error_tools pattern