//! GPU pipeline provider — extension point for platform-specific GPU residency.
//!
//! [`GpuPipelineProvider`] is the public trait that platform-specific crates
//! implement to provide GPU-resident frame delivery through GStreamer.
//! The built-in CUDA path (via `DeviceResidency::Cuda`) uses upstream
//! GStreamer CUDA elements (`cudaupload`, `cudaconvert`), available on
//! GStreamer >= 1.20 (including JetPack 6 / L4T R36). External crates
//! (e.g., `nv-jetson`) implement this trait to support hardware where
//! those elements are unavailable, such as NVMM-based GPU residency on
//! JetPack 5.x (GStreamer 1.16).
//!
//! # GStreamer dependency
//!
//! This trait takes GStreamer types (`gstreamer::Sample`) in its method
//! signatures because it operates at the media backend boundary. External
//! crates implementing this trait explicitly opt into the `gstreamer`
//! dependency. This is the deliberate extension surface for the GStreamer
//! backend — upstream of this point, no GStreamer types are visible.
//!
//! # Pipeline topology
//!
//! The provider controls two parts of the per-feed pipeline:
//!
//! 1. **Pipeline tail** — the GStreamer elements between the decoder and
//!    the appsink (`build_pipeline_tail`). For upstream CUDA this is
//!    `cudaupload → cudaconvert → appsink(CUDAMemory)`. For Jetson NVMM
//!    it might be `nvvidconv → appsink(NVMM)` or just `appsink(NVMM)`.
//!
//! 2. **Frame bridge** — the function that converts a `GstSample` into a
//!    `FrameEnvelope` with device-resident pixel data (`bridge_sample`).
//!
//! The provider controls the full pipeline tail; any decoder-to-tail
//! bridging elements should be included as the first element(s) in
//! the tail returned by `build_pipeline_tail`.
use std::sync::Arc;
use std::sync::atomic::AtomicU64;

use crate::{FeedId, FrameEnvelope, MediaError, PixelFormat, PtzTelemetry};

/// Result of [`GpuPipelineProvider::build_pipeline_tail`].
///
/// Contains the GStreamer elements that form the pipeline segment between
/// the decoder (or post-decode hook) and the appsink, plus the configured
/// appsink itself.
pub struct PipelineTail {
    /// Elements between the decoder (or post-decode hook) and the appsink,
    /// in link order. (Field names sketched from the doc comment above; the
    /// original definition is missing from this excerpt.)
    pub elements: Vec<gstreamer::Element>,
    /// The configured appsink that terminates the tail.
    pub appsink: gstreamer_app::AppSink,
}

/// Extension point for GPU-resident pipeline construction.
///
/// Platform-specific crates implement this trait to provide tailored
/// pipeline topology and frame bridging for their GPU memory model.
///
/// The built-in CUDA path is available via `DeviceResidency::Cuda`
/// without implementing this trait. Applications only need a custom
/// provider when the built-in elements are unavailable (e.g., NVMM
/// on JetPack 5.x) or when a different GPU memory model is required.
///
/// # Thread safety
///
/// Implementations must be `Send + Sync` because the provider is shared
/// between the pipeline-building code (source management thread) and the
/// appsink callback (GStreamer streaming thread) via `Arc`.
///
/// # Example
///
/// ```rust,ignore
/// use std::sync::Arc;
/// use nv_media::gpu_provider::GpuPipelineProvider;
/// use nv_media::DeviceResidency;
///
/// let provider: Arc<dyn GpuPipelineProvider> = Arc::new(MyJetsonProvider::new());
///
/// let config = FeedConfig::builder()
///     .device_residency(DeviceResidency::Provider(provider))
///     // ...
///     .build()?;
/// ```
pub trait GpuPipelineProvider: Send + Sync {
    /// Build the pipeline segment between the decoder (or post-decode
    /// hook) and the appsink.
    ///
    /// (The parameter list is not documented in this excerpt and is
    /// sketched here; only the return type is attested above.)
    fn build_pipeline_tail(&self) -> Result<PipelineTail, MediaError>;

    /// Convert a `GstSample` into a [`FrameEnvelope`] with device-resident
    /// pixel data.
    fn bridge_sample(
        &self,
        feed_id: FeedId,
        seq: &Arc<AtomicU64>,
        pixel_format: PixelFormat,
        sample: &gstreamer::Sample,
        ptz: Option<PtzTelemetry>,
    ) -> Result<FrameEnvelope, MediaError>;
}

/// Shared handle to a [`GpuPipelineProvider`].
///
/// Used by `IngressOptions`, `SessionConfig`, and the pipeline builder.
pub type SharedGpuProvider = Arc<dyn GpuPipelineProvider>;

// ---------------------------------------------------------------------------
// Provider authoring helpers
// ---------------------------------------------------------------------------
/// Pre-extracted metadata from a GStreamer sample.
///
/// Providers call [`SampleInfo::extract()`] at the top of their
/// [`bridge_sample`](GpuPipelineProvider::bridge_sample) implementation
/// to avoid re-deriving width/height/stride/timestamps from raw
/// GStreamer types.
///
/// # Example
///
/// ```rust,ignore
/// fn bridge_sample(
///     &self,
///     feed_id: FeedId,
///     seq: &Arc<AtomicU64>,
///     pixel_format: PixelFormat,
///     sample: &gstreamer::Sample,
///     ptz: Option<PtzTelemetry>,
/// ) -> Result<FrameEnvelope, MediaError> {
///     let info = SampleInfo::extract(sample, seq)?;
///     // … platform-specific handle extraction …
///     Ok(info.into_device_envelope(feed_id, pixel_format, handle, Some(materialize), ptz))
/// }
/// ```
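// The doc comment above describes `SampleInfo`, but its definition is
// missing from this excerpt. A minimal sketch of the metadata carrier,
// assuming plain-integer dimensions and a nanosecond timestamp (these
// field names are reconstructions, not the crate's confirmed API). The
// `extract()` and `into_device_envelope()` methods referenced above are
// omitted here because their bodies depend on GStreamer internals.
#[derive(Debug, Clone, Copy)]
pub struct SampleInfo {
    /// Frame width in pixels.
    pub width: u32,
    /// Frame height in pixels.
    pub height: u32,
    /// Row stride in bytes of the first plane.
    pub stride: u32,
    /// Presentation timestamp in nanoseconds, if the sample carried one.
    pub pts_ns: Option<u64>,
    /// Per-feed sequence number taken from the shared atomic counter.
    pub seq: u64,
}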