pub trait AsyncNominalChannelWriterService<I> {
    // Required methods
    fn write_batches(
        &self,
        auth_: BearerToken,
        request: WriteBatchesRequestExternal,
    ) -> impl Future<Output = Result<(), Error>> + Send;
    fn write_column_batches(
        &self,
        auth_: BearerToken,
        request: WriteColumnBatchesRequest,
    ) -> impl Future<Output = Result<(), Error>> + Send;
    fn write_telegraf_batches(
        &self,
        auth_: BearerToken,
        data_source_rid: NominalDataSourceOrDatasetRid,
        request: WriteTelegrafBatchesRequest,
    ) -> impl Future<Output = Result<(), Error>> + Send;
    fn write_prometheus_batches(
        &self,
        auth_: BearerToken,
        data_source_rid: NominalDataSourceOrDatasetRid,
        request: I,
    ) -> impl Future<Output = Result<(), Error>> + Send;
    fn prometheus_remote_write_health_check(
        &self,
        auth_: BearerToken,
        data_source_rid: NominalDataSourceOrDatasetRid,
    ) -> impl Future<Output = Result<bool, Error>> + Send;
    fn write_nominal_batches(
        &self,
        auth_: BearerToken,
        data_source_rid: NominalDataSourceOrDatasetRid,
        request: I,
    ) -> impl Future<Output = Result<(), Error>> + Send;
    fn write_logs(
        &self,
        auth_: BearerToken,
        data_source_rid: NominalDataSourceOrDatasetRid,
        request: WriteLogsRequest,
    ) -> impl Future<Output = Result<(), Error>> + Send;
}
Write data points to Nominal data sources.
Required Methods
fn write_batches(
    &self,
    auth_: BearerToken,
    request: WriteBatchesRequestExternal,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes batches of records to a Nominal data source.
If the request is too large, either due to the number of individual batches (more than 10) or the number of points across batches (more than 500k), it may be split into multiple requests internally when writing to the Nominal data source. Generally, it is advisable to limit a single request to around 50k points.
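As a usage sketch, the helper below is generic over any implementation of this trait; construction of WriteBatchesRequestExternal is elided because its fields are not shown on this page.

// A minimal sketch, assuming BearerToken, WriteBatchesRequestExternal,
// and Error are in scope from the defining crate.
async fn push_batches<S, I>(
    service: &S,
    auth: BearerToken,
    request: WriteBatchesRequestExternal,
) -> Result<(), Error>
where
    S: AsyncNominalChannelWriterService<I>,
{
    // Keeping a request to at most 10 batches and well under 500k
    // points (ideally around 50k) avoids internal splitting.
    service.write_batches(auth, request).await
}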
fn write_column_batches(
    &self,
    auth_: BearerToken,
    request: WriteColumnBatchesRequest,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes batches of columns of data to a Nominal data source.
This is a column-major variant of writeBatches (which is row-major), intended to reduce serialization and compression time for client applications that stream large numbers of points from a single column at a time. The tradeoff is slightly larger request sizes after gzipping, so it should be used only when the main bottleneck is encoding columnar data into the row-based format used by writeBatches.
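To make the row-major versus column-major distinction concrete, the types below sketch the two layouts; they are illustrative only, not the crate's actual request types.

// Illustrative only: not this crate's request types.
// Row-major (writeBatches): one struct per point, so heterogeneous
// fields are interleaved in the serialized stream.
struct RowPoint {
    timestamp_ns: i64,
    value: f64,
}

// Column-major (writeColumnBatches): one vector per field. Long runs
// of same-typed data encode faster, at the cost of slightly larger
// gzipped payloads.
struct ColumnBatch {
    timestamps_ns: Vec<i64>,
    values: Vec<f64>,
}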
fn write_telegraf_batches(
    &self,
    auth_: BearerToken,
    data_source_rid: NominalDataSourceOrDatasetRid,
    request: WriteTelegrafBatchesRequest,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes batches of records to a Nominal data source.
Has the same functionality as writeBatches, but accepts the Telegraf batch output format; the Telegraf batch format is assumed, with timestamps in nanoseconds. The URL in the Telegraf output plugin configuration should be the fully qualified URL, including the dataSourceRid query parameter.
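As a rough sketch of the URL shape, with a hypothetical host, path, and RID value (only the dataSourceRid query parameter is prescribed above):

// Hypothetical host, path, and RID value; only the dataSourceRid
// query parameter comes from the documentation above.
let data_source_rid = "ri.datasource.example-1234"; // hypothetical
let url = format!(
    "https://api.example.com/telegraf/write?dataSourceRid={data_source_rid}"
);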
fn write_prometheus_batches(
    &self,
    auth_: BearerToken,
    data_source_rid: NominalDataSourceOrDatasetRid,
    request: I,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes batches of records to a Nominal data source.
Has the same functionality as writeBatches, but is encoded using the Prometheus remote write format, following the specification defined at https://prometheus.io/docs/specs/remote_write_spec/. There are a few notable caveats (see the sketch after this list):
- Requests must use the application/x-protobuf content type
- Request bodies must be compressed using snappy compression
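The sketch below prepares a payload according to these caveats; the snap crate is an assumption (any Snappy block-format implementation works), and the input is taken to be an already protobuf-encoded remote write request.

// Sketch: Snappy-compress a protobuf-encoded remote write request.
// The `snap` crate is an assumption; `snap::raw` provides the block
// format that the remote write spec requires.
fn compress_remote_write(encoded_protobuf: &[u8]) -> Result<Vec<u8>, snap::Error> {
    // Send the result with Content-Type: application/x-protobuf
    // and Content-Encoding: snappy.
    snap::raw::Encoder::new().compress_vec(encoded_protobuf)
}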
fn prometheus_remote_write_health_check(
    &self,
    auth_: BearerToken,
    data_source_rid: NominalDataSourceOrDatasetRid,
) -> impl Future<Output = Result<bool, Error>> + Send
Performs a health check for the Prometheus remote write Vector sink. This endpoint only verifies that the caller is authenticated and the server is online. Once Vector allows the Prometheus remote write sink to configure its health check URL, this endpoint can be removed.
See: https://github.com/vectordotdev/vector/issues/8279
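A minimal calling sketch, generic over any implementation of the trait:

// Returns whether the caller is authenticated and the server is
// online; no data-source state is checked beyond that.
async fn remote_write_ready<S, I>(
    service: &S,
    auth: BearerToken,
    rid: NominalDataSourceOrDatasetRid,
) -> Result<bool, Error>
where
    S: AsyncNominalChannelWriterService<I>,
{
    service.prometheus_remote_write_health_check(auth, rid).await
}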
fn write_nominal_batches(
    &self,
    auth_: BearerToken,
    data_source_rid: NominalDataSourceOrDatasetRid,
    request: I,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes a Nominal Write Request to a Nominal data source using the NominalWrite Protobuf schema. The request must be Protobuf-encoded and accompanied by the appropriate content encoding headers if compressed.
The request should follow this Protobuf schema:
message WriteRequestNominal {
  repeated Series series = 1;
}

message Series {
  Channel channel = 1;
  map<string, string> tags = 2; // Key-value pairs for series tags
  Points points = 3;            // Contains either double or string points
}

message Channel {
  string name = 1;
}

message Points {
  oneof points_type {
    DoublePoints double_points = 1;
    StringPoints string_points = 2;
  }
}

message DoublePoints {
  repeated DoublePoint points = 1;
}

message StringPoints {
  repeated StringPoint points = 1;
}

message DoublePoint {
  google.protobuf.Timestamp timestamp = 1;
  double value = 2;
}

message StringPoint {
  google.protobuf.Timestamp timestamp = 1;
  string value = 2;
}
Each request can contain multiple series, where each series consists of:
- A channel name
- A map of tags (key-value pairs)
- A collection of points, which can be either double or string values; each point pairs a timestamp (google.protobuf.Timestamp) with its value
The endpoint requires the Content-Type header to be set to “application/x-protobuf”. If the payload is compressed, the appropriate Content-Encoding header must be included.
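As a sketch of building such a request in Rust, assuming bindings generated from the schema above (for example with prost-build; the nominal_write module name and the prost/prost-types dependencies are assumptions):

use prost::Message; // assumed protobuf library
use prost_types::Timestamp;

// `nominal_write` is a hypothetical module holding types generated
// from the schema above (e.g. by prost-build).
use nominal_write::{
    points::PointsType, Channel, DoublePoint, DoublePoints, Points, Series,
    WriteRequestNominal,
};

fn build_write_request() -> Vec<u8> {
    let point = DoublePoint {
        timestamp: Some(Timestamp { seconds: 1_700_000_000, nanos: 0 }),
        value: 42.0,
    };
    let series = Series {
        channel: Some(Channel { name: "temperature".into() }),
        tags: [("site".to_string(), "lab-1".to_string())].into(),
        points: Some(Points {
            points_type: Some(PointsType::DoublePoints(DoublePoints {
                points: vec![point],
            })),
        }),
    };
    // Encode to bytes; send with Content-Type: application/x-protobuf,
    // plus a Content-Encoding header if the payload is compressed.
    WriteRequestNominal { series: vec![series] }.encode_to_vec()
}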
fn write_logs(
    &self,
    auth_: BearerToken,
    data_source_rid: NominalDataSourceOrDatasetRid,
    request: WriteLogsRequest,
) -> impl Future<Output = Result<(), Error>> + Send
Synchronously writes logs to a Nominal data source.
Dyn Compatibility
This trait is not dyn compatible.
In older versions of Rust, dyn compatibility was called "object safety", so this trait is not object safe.