
Crate nv_runtime


§nv-runtime

Pipeline orchestration and runtime for the NextVision video perception library.

§Conceptual model

The runtime manages feeds — independent video streams, each running on a dedicated OS thread. For every frame, a linear pipeline of user-defined stages produces structured perception output.

Media source → FrameQueue → [Stage 1 → Stage 2 → …] → OutputSink
                                │                             │
                         TemporalStore               Broadcast channel
                         ViewState                   (optional subscribers)

The media backend (GStreamer, via nv-media) is an implementation detail. Users interact with backend-agnostic types: SourceSpec, FeedConfig, and OutputEnvelope. A custom backend can be injected via RuntimeBuilder::ingress_factory.
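As a rough sketch of backend injection (the closure signature shown here is an assumption; consult `RuntimeBuilder::ingress_factory` for the real one):

```rust
use nv_runtime::*;

let runtime = Runtime::builder()
    .ingress_factory(|spec: &SourceSpec| {
        // Construct a custom, backend-specific ingress for `spec`
        // instead of the default GStreamer path from nv-media.
        ...
    })
    .build()?;
```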

§Key types

The primary entry points are Runtime / RuntimeBuilder, FeedConfig, FeedPipeline, OutputSink, and BatchHandle; the re-exports below list the full public surface.

§PTZ / view-state handling

Moving-camera feeds use CameraMode::Observed with a user-supplied ViewStateProvider. The runtime polls it each frame, runs an EpochPolicy, and manages view epochs, continuity degradation, and trajectory segmentation automatically. Fixed cameras use CameraMode::Fixed and skip the view-state machinery entirely.
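In the style of the examples below, an Observed-mode feed might be configured as follows. This is a sketch only: `MyPtzProvider` is hypothetical, and how the `ViewStateProvider` attaches (as a payload of `CameraMode::Observed`, or via a dedicated builder method such as the hypothetical `view_state_provider` shown here) depends on the actual API:

```rust
use nv_runtime::*;
use nv_core::*;

// Hypothetical provider reporting the camera's current view state
// (pan/tilt/zoom); the real `ViewStateProvider` trait may differ.
struct MyPtzProvider;

let config = FeedConfig::builder()
    .source(SourceSpec::rtsp("rtsp://ptz-cam/stream"))
    .camera_mode(CameraMode::Observed)
    // Hypothetical builder method; attachment may instead happen
    // through the `Observed` variant itself.
    .view_state_provider(Box::new(MyPtzProvider))
    .build()?;
```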

§Out of scope

The runtime does not include domain-specific event taxonomies, alerting workflows, calibration semantics, or UI concerns. Those belong in layers built on top of this library.

§Minimal usage

use nv_runtime::*;
use nv_core::*;

struct MySink;
impl OutputSink for MySink {
    fn emit(&self, _output: SharedOutput) {}
}

// `MyStage` is a user-defined pipeline stage (definition elided).
let runtime = Runtime::builder().build()?;
let _feed = runtime.add_feed(
    FeedConfig::builder()
        .source(SourceSpec::rtsp("rtsp://cam/stream"))
        .camera_mode(CameraMode::Fixed)
        .stages(vec![Box::new(MyStage)])
        .output_sink(Box::new(MySink))
        .build()?,
)?;
// runtime.shutdown();

§Batch inference across feeds

Multiple feeds can share a single GPU-accelerated batch processor via BatchHandle. Create a batch handle once, then reference it from each feed’s pipeline.

use nv_runtime::*;
use nv_core::*;
use nv_perception::batch::{BatchProcessor, BatchEntry};
use std::time::Duration;

// `MyDetector` implements `BatchProcessor`; `MySink` implements
// `OutputSink` (definitions elided, as in the example above).
let runtime = Runtime::builder().build()?;

// Create a shared batch coordinator.
let batch = runtime.create_batch(
    Box::new(MyDetector),
    BatchConfig {
        max_batch_size: 8,
        max_latency: Duration::from_millis(50),
        queue_capacity: None,
        response_timeout: None,
        max_in_flight_per_feed: 1,
        startup_timeout: None,
    },
)?;

// Build per-feed pipelines referencing the shared batch. A pipeline
// accepts at most one batch point, so `batch()` is fallible.
let pipeline = FeedPipeline::builder()
    .batch(batch.clone()).expect("single batch point")
    .build();

let _feed = runtime.add_feed(
    FeedConfig::builder()
        .source(SourceSpec::rtsp("rtsp://cam/stream"))
        .camera_mode(CameraMode::Fixed)
        .feed_pipeline(pipeline)
        .output_sink(Box::new(MySink))
        .build()?,
)?;

Re-exports§

pub use backpressure::BackpressurePolicy;
pub use batch::BatchConfig;
pub use batch::BatchHandle;
pub use batch::BatchMetrics;
pub use diagnostics::BatchDiagnostics;
pub use diagnostics::FeedDiagnostics;
pub use diagnostics::OutputLagStatus;
pub use diagnostics::RuntimeDiagnostics;
pub use diagnostics::ViewDiagnostics;
pub use diagnostics::ViewStatus;
pub use feed::FeedConfig;
pub use feed::FeedConfigBuilder;
pub use feed_handle::DecodeStatus;
pub use feed_handle::FeedHandle;
pub use feed_handle::QueueTelemetry;
pub use output::AdmissionSummary;
pub use output::FrameInclusion;
pub use output::OutputEnvelope;
pub use output::OutputSink;
pub use output::SharedOutput;
pub use output::SinkFactory;
pub use pipeline::FeedPipeline;
pub use pipeline::FeedPipelineBuilder;
pub use pipeline::PipelineError;
pub use provenance::Provenance;
pub use provenance::StageOutcomeCategory;
pub use provenance::StageProvenance;
pub use provenance::StageResult;
pub use provenance::ViewProvenance;
pub use runtime::Runtime;
pub use runtime::RuntimeBuilder;
pub use runtime::RuntimeHandle;
pub use shutdown::RestartPolicy;
pub use shutdown::RestartTrigger;

Modules§

backpressure
Backpressure policy configuration.
batch
Shared batch coordination infrastructure.
diagnostics
Consolidated diagnostics snapshots for feeds and the runtime.
feed
Feed configuration and validation.
feed_handle
Feed handle and runtime-observable types.
output
Output types: OutputEnvelope, OutputSink trait, SinkFactory, and lag detection.
pipeline
Unified feed pipeline — stages with an optional shared batch point.
provenance
Provenance types — audit trail for stage and view-system decisions.
runtime
Runtime, runtime builder, and runtime handle.
shutdown
Shutdown and restart policy types.

Structs§

DecodeCapabilities
Lightweight capability information about the decode backend.
DecodedStreamInfo
Information about a decoded stream, provided to PostDecodeHook callbacks.

Enums§

CustomPipelinePolicy
Whether SourceSpec::Custom pipeline fragments are trusted.
DecodeOutcome
Backend-neutral classification of the effective decoder.
DecodePreference
User-facing decode preference for a feed.
DeviceResidency
Where decoded frames reside after the media bridge.
HealthEvent
A health event emitted by the runtime.
RtspSecurityPolicy
RTSP transport security policy.
ValidationMode
Controls whether StagePipeline::validate / validate_stages warnings are ignored, logged, or promoted to hard errors.
ValidationWarning
Advisory warning from StagePipeline::validate().

Traits§

GpuPipelineProvider
Extension point for GPU-resident pipeline construction.

Functions§

discover_decode_capabilities
Probe the media backend for hardware decode capabilities.

Type Aliases§

PostDecodeHook
Hook invoked once per feed when the decoded stream’s caps are known.
SharedGpuProvider
Shared handle to a GpuPipelineProvider.