pub struct NominalChannelWriterServiceClient<T>(/* private fields */);
Write data points to Nominal data sources.
Implementations

impl<T> NominalChannelWriterServiceClient<T>
where
    T: Client,
pub fn write_batches(
    &self,
    auth_: &BearerToken,
    request: &WriteBatchesRequestExternal,
) -> Result<(), Error>
Synchronously writes batches of records to a Nominal data source.
If the request is too large, either because it contains too many individual batches (> 10) or too many points across all batches (> 500k), it may be split into multiple requests internally before being written to the Nominal data source. As a general guideline, limit each request to roughly 50k points.
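A minimal call sketch is shown below. It assumes a client already constructed over an HTTP implementation T: Client, a valid BearerToken, and a WriteBatchesRequestExternal built from the generated request types; imports are omitted because module paths depend on how the crate re-exports these types, and the helper name is illustrative only.

// Illustrative helper; all types are the ones documented on this page.
fn send_points<T: Client>(
    client: &NominalChannelWriterServiceClient<T>,
    auth: &BearerToken,
    request: &WriteBatchesRequestExternal,
) -> Result<(), Error> {
    // Blocks until the write completes; oversized requests are split
    // internally as described above.
    client.write_batches(auth, request)
}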
pub fn write_column_batches(
    &self,
    auth_: &BearerToken,
    request: &WriteColumnBatchesRequest,
) -> Result<(), Error>
Synchronously writes batches of columns of data to a Nominal data source.
This is a column-major variant of writeBatches (which is row-major), intended to reduce serialization and compression time for clients that stream large numbers of points from a single column at a time. The tradeoff is slightly larger request sizes after gzip compression, so use it only when the main bottleneck is encoding columnar data into the row-based format expected by writeBatches.
pub fn write_telegraf_batches(
    &self,
    auth_: &BearerToken,
    data_source_rid: &NominalDataSourceOrDatasetRid,
    request: &WriteTelegrafBatchesRequest,
) -> Result<(), Error>
Synchronously writes batches of records to a Nominal data source.
Has the same functionality as writeBatches, but accepts data in the Telegraf batch output format. Timestamps are assumed to be in nanoseconds. The URL configured in the Telegraf output plugin should be the fully qualified endpoint URL, including the dataSourceRid query parameter.
pub fn write_prometheus_batches<U>(
    &self,
    auth_: &BearerToken,
    data_source_rid: &NominalDataSourceOrDatasetRid,
    request: U,
) -> Result<(), Error>
where
    U: WriteBody<T::BodyWriter>,
Synchronously writes batches of records to a Nominal data source.
Has the same functionality as writeBatches, but the request body is encoded using the Prometheus remote write format, following the specification at https://prometheus.io/docs/specs/remote_write_spec/. There are a few notable caveats; a sketch of preparing a conforming body follows the list:
- The body must be content encoded as application/x-protobuf
- The body must be compressed using snappy compression
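In the rough sketch below, the remote_write module stands in for Rust types generated from the Prometheus remote write protobuf definitions (for example with prost-build), and the snap crate is used as one way to produce the required snappy block compression. How the resulting bytes are wrapped into a WriteBody<T::BodyWriter> depends on the client adapter in use and is not shown.

use prost::Message;

// remote_write::WriteRequest is a placeholder for your generated type.
fn encode_remote_write_body(req: &remote_write::WriteRequest) -> Result<Vec<u8>, snap::Error> {
    // Serialize to the application/x-protobuf payload...
    let proto_bytes = req.encode_to_vec();
    // ...then apply snappy (block format) compression, as the spec requires.
    snap::raw::Encoder::new().compress_vec(&proto_bytes)
}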
pub fn prometheus_remote_write_health_check(
    &self,
    auth_: &BearerToken,
    data_source_rid: &NominalDataSourceOrDatasetRid,
) -> Result<bool, Error>
Performs a health check for the Prometheus remote write Vector sink. This endpoint only verifies that the caller is authenticated and that the server is online. Once Vector allows the healthcheck URL for its Prometheus remote write sink to be configured, this endpoint can be removed.
See: https://github.com/vectordotdev/vector/issues/8279
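An illustrative way to call the health check and consume the returned boolean (imports omitted; the helper name is a placeholder):

fn check_remote_write<T: Client>(
    client: &NominalChannelWriterServiceClient<T>,
    auth: &BearerToken,
    rid: &NominalDataSourceOrDatasetRid,
) -> Result<bool, Error> {
    // Returns true when the caller is authenticated and the server is online.
    client.prometheus_remote_write_health_check(auth, rid)
}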
pub fn write_nominal_batches<U>(
    &self,
    auth_: &BearerToken,
    data_source_rid: &NominalDataSourceOrDatasetRid,
    request: U,
) -> Result<(), Error>
where
    U: WriteBody<T::BodyWriter>,
Synchronously writes a Nominal Write Request to a Nominal data source using the NominalWrite Protobuf schema. The request must be Protobuf-encoded and accompanied by the appropriate content encoding headers if compressed.
The request should follow this Protobuf schema:
message WriteRequestNominal {
  repeated Series series = 1;
}

message Series {
  Channel channel = 1;
  map<string, string> tags = 2; // Key-value pairs for series tags
  Points points = 3;            // Contains either double or string points
}

message Channel {
  string name = 1;
}

message Points {
  oneof points_type {
    DoublePoints double_points = 1;
    StringPoints string_points = 2;
  }
}

message DoublePoints {
  repeated DoublePoint points = 1;
}

message StringPoints {
  repeated StringPoint points = 1;
}

message DoublePoint {
  google.protobuf.Timestamp timestamp = 1;
  double value = 2;
}

message StringPoint {
  google.protobuf.Timestamp timestamp = 1;
  string value = 2;
}
Each request can contain multiple series, where each series consists of:
- A channel name
- A map of tags (key-value pairs)
- A collection of points, which can be either double or string values
- Each point includes a timestamp (using google.protobuf.Timestamp) and its value
The endpoint requires the Content-Type header to be set to “application/x-protobuf”. If the payload is compressed, the appropriate Content-Encoding header must be included.
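As a rough sketch, and assuming Rust types generated from the schema above with prost-build (the nominal_write module name, the nested points module for the oneof, and the example channel, tag, and timestamp values are all placeholders that depend on your codegen setup), building and encoding a small request might look like this:

use prost::Message;
use prost_types::Timestamp;
use nominal_write::{points, Channel, DoublePoint, DoublePoints, Points, Series, WriteRequestNominal};

fn build_request_bytes() -> Vec<u8> {
    let point = DoublePoint {
        timestamp: Some(Timestamp { seconds: 1_700_000_000, nanos: 0 }),
        value: 42.0,
    };
    let series = Series {
        channel: Some(Channel { name: "example_channel".to_string() }),
        tags: std::collections::HashMap::from([("vehicle".to_string(), "unit_1".to_string())]),
        points: Some(Points {
            points_type: Some(points::PointsType::DoublePoints(DoublePoints {
                points: vec![point],
            })),
        }),
    };
    // Protobuf-encode the request body; send it with
    // Content-Type: application/x-protobuf (plus Content-Encoding if compressed).
    WriteRequestNominal { series: vec![series] }.encode_to_vec()
}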
pub fn write_logs(
    &self,
    auth_: &BearerToken,
    data_source_rid: &NominalDataSourceOrDatasetRid,
    request: &WriteLogsRequest,
) -> Result<(), Error>
Synchronously writes logs to a Nominal data source.
Trait Implementations

impl<T: Clone> Clone for NominalChannelWriterServiceClient<T>

fn clone(&self) -> NominalChannelWriterServiceClient<T>
fn clone_from(&mut self, source: &Self)

impl<T: Debug> Debug for NominalChannelWriterServiceClient<T>
Auto Trait Implementations

impl<T> Freeze for NominalChannelWriterServiceClient<T> where T: Freeze
impl<T> RefUnwindSafe for NominalChannelWriterServiceClient<T> where T: RefUnwindSafe
impl<T> Send for NominalChannelWriterServiceClient<T> where T: Send
impl<T> Sync for NominalChannelWriterServiceClient<T> where T: Sync
impl<T> Unpin for NominalChannelWriterServiceClient<T> where T: Unpin
impl<T> UnwindSafe for NominalChannelWriterServiceClient<T> where T: UnwindSafe
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T

impl<T> CloneToUninit for T where T: Clone

impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>

impl<T> IntoRequest<T> for T

fn into_request(self) -> Request<T>
Wraps T in a tonic::Request.