pub struct Client { /* private fields */ }
Client for Feldera API
With Feldera, users create data pipelines from SQL programs. A SQL program comprises tables and views, along with definitions of input connectors for its tables and output connectors for its views. A connector defines a data source that feeds input data into a table, or a data sink that receives output data computed by a view.
§Pipeline
The API is centered around the pipeline, which most importantly consists of the SQL program, but also includes accompanying metadata and configuration parameters (e.g., compilation profile, number of workers).
- A pipeline is identified and referred to by its user-provided unique name.
- The pipeline program is asynchronously compiled when the pipeline is first created or when its program is subsequently updated.
- Deployment of a pipeline can only proceed to provisioning once its program has compiled successfully.
- A pipeline cannot be updated while it is deployed.
§Concurrency
Each pipeline has a version, which is incremented each time its core fields are updated. The version is monotonically increasing. There is additionally a program version which covers only the program-related core fields, and is used by the compiler to discern when to recompile.
§Client request handling
§Request outcome expectations
A request either fails without any response (e.g., a DNS lookup failure, so no status code or body), or succeeds and receives a response with a status code and body.
In case of a response, usually it is the Feldera endpoint that generated it:
- If it is a success (2xx), the body is whichever body belongs to that success response.
- If it is an error (4xx, 5xx), the body is a Feldera error response JSON body, which includes an application-level error_code.
However, there are two notable exceptions when the response is not generated by the Feldera endpoint:
- If the HTTP server, to which the endpoint belongs, encountered an issue, it might return 4xx (e.g., for an unknown endpoint) or 5xx error codes by itself (e.g., when it is initializing).
- If the Feldera API server is behind a (reverse) proxy, the proxy can return error codes by itself, for example BAD GATEWAY (502) or GATEWAY TIMEOUT (504).
As such, it is not guaranteed that a (4xx, 5xx) response will have a Feldera error response JSON body in these latter cases.
§Error handling and retrying
The error type returned by the client should distinguish between the error responses generated by Feldera endpoints themselves (which have a Feldera error response body) and those that are generated by other sources.
In order for a client operation (e.g., pipeline.resume()) to be robust (i.e., not fail due to
a single HTTP request not succeeding) the client should use a retry mechanism if the operation
is idempotent. The retry mechanism must however have a time limit, after which it times out.
This guarantees that the client operation is eventually responsive, which enables the script
it is a part of to not hang indefinitely on Feldera operations and instead be able to decide
by itself whether and how to proceed. If no response is returned, the mechanism should generally
retry. When a response is returned, the decision whether to retry can depend on the status
code: in particular, the status codes 502, 503, and 504 should be considered transient errors.
Finer-grained retry decisions should take into account the application-level
error_code if the response body was indeed a Feldera error response body.
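The policy above can be sketched as a small helper (illustrative only; `should_retry` and `retry_with_timeout` are not part of this crate, and a real client would retry the actual HTTP call):

```rust
use std::time::{Duration, Instant};

/// Decide whether a failed attempt is worth retrying.
/// `status` is `None` when no response was received at all.
fn should_retry(status: Option<u16>) -> bool {
    match status {
        // No response (e.g., DNS lookup failed): generally retry.
        None => true,
        // 502, 503 and 504 are transient errors.
        Some(502) | Some(503) | Some(504) => true,
        // Other status codes: do not retry by default.
        Some(_) => false,
    }
}

/// Retry an idempotent operation until it succeeds or an overall
/// deadline passes, so the caller is eventually responsive.
fn retry_with_timeout<T, F>(mut op: F, timeout: Duration) -> Result<T, String>
where
    F: FnMut() -> Result<T, Option<u16>>,
{
    let deadline = Instant::now() + timeout;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(status) if should_retry(status) && Instant::now() < deadline => {
                // Back off briefly before the next attempt.
                std::thread::sleep(Duration::from_millis(100));
            }
            Err(status) => return Err(format!("giving up, last status: {:?}", status)),
        }
    }
}
```

A production client would typically add exponential backoff and consult the Feldera error_code for finer-grained decisions.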
§Feldera client errors (4xx)
Client behavior: clients should generally return an error when they get back a 4xx status code, as the request will usually not succeed even if sent again. Certain requests may use a timed retry mechanism when the client error is transient and requires no user intervention to overcome, for instance a transaction already being in progress causing a temporary CONFLICT (409) error.
- BAD REQUEST (400): invalid user request (general).
  - Example: the new pipeline name example1@~ contains invalid characters.
- UNAUTHORIZED (401): the user is not authorized to issue the request.
  - Example: an invalid API key is provided.
- NOT FOUND (404): a resource required to exist in order to process the request was not found.
  - Example: a pipeline named example does not exist when trying to update it.
- CONFLICT (409): there is a conflict between the request and a relevant resource.
  - Example: a pipeline named example already exists.
  - Example: another transaction is already in progress.
§Feldera server errors (5xx)
- INTERNAL SERVER ERROR (500): the server is unexpectedly unable to process the request (general).
  - Example: unable to reach the database.
  - Client behavior: immediately return with an error.
- NOT IMPLEMENTED (501): the server does not implement functionality required to process the request.
  - Example: making a request to an enterprise-only endpoint in the OSS edition.
  - Client behavior: immediately return with an error.
- SERVICE UNAVAILABLE (503): the server is not (yet) able to process the request.
  - Example: pausing a pipeline that is still provisioning.
  - Client behavior: depending on the type of request, the client may use a timed retry mechanism.
§Feldera error response body
When the Feldera API returns an HTTP error status code (4xx, 5xx), the body will contain the following JSON object:
{
  "message": "Human-readable explanation.",
  "error_code": "CodeSpecifyingError",
  "details": {
  }
}

It contains the following fields:
- message (string): human-readable explanation of the error that occurred, potentially hinting at what can be done about it.
- error_code (string): application-level code for the error that occurred, written in CamelCase. For example: UnknownPipelineName, DuplicateName, PauseWhileNotProvisioned, ….
- details (object): JSON object corresponding to the error_code, with fields that provide details relevant to it. For example: if a name is unknown, a field with the unknown name in question.
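For illustration, the body can be mirrored by a plain Rust struct, together with a naive check for distinguishing a Feldera-generated error body from one produced by a proxy or the HTTP server itself (a sketch; a real client would fully deserialize the JSON, e.g. with serde, rather than match on substrings):

```rust
/// Illustrative mirror of the Feldera error response body. The real
/// `details` field is a JSON object whose shape depends on `error_code`,
/// so it is omitted here.
#[derive(Debug, Clone, PartialEq)]
struct FelderaError {
    message: String,
    error_code: String,
}

/// Naive heuristic: does this 4xx/5xx body look like a Feldera error
/// response rather than, say, an HTML page from a reverse proxy?
fn looks_like_feldera_error(body: &str) -> bool {
    body.contains("\"error_code\"") && body.contains("\"message\"")
}
```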
Version: 0.163.0
Implementations§
impl Client
pub fn new(baseurl: &str) -> Self
Create a new client.
baseurl is the base URL provided to the internal
reqwest::Client, and should include a scheme and hostname,
as well as port and a path stem if applicable.
pub fn new_with_client(baseurl: &str, client: Client) -> Self
Construct a new client with an existing reqwest::Client,
allowing more control over its configuration.
baseurl is the base URL provided to the internal
reqwest::Client, and should include a scheme and hostname,
as well as port and a path stem if applicable.
pub fn api_version(&self) -> &'static str
Get the version of this API.
This string is pulled directly from the source OpenAPI document and may be in any format the API selects.
impl Client
pub fn get_config_authentication(&self) -> GetConfigAuthentication<'_>
Retrieve authentication provider configuration
Sends a GET request to /config/authentication
let response = client.get_config_authentication()
.send()
.await;
pub fn list_api_keys(&self) -> ListApiKeys<'_>
Retrieve the list of API keys
Sends a GET request to /v0/api_keys
let response = client.list_api_keys()
.send()
.await;
pub fn post_api_key(&self) -> PostApiKey<'_>
Create a new API key
Sends a POST request to /v0/api_keys
Arguments:
body:
let response = client.post_api_key()
.body(body)
.send()
.await;
pub fn get_api_key(&self) -> GetApiKey<'_>
Retrieve an API key
Sends a GET request to /v0/api_keys/{api_key_name}
Arguments:
api_key_name: Unique API key name
let response = client.get_api_key()
.api_key_name(api_key_name)
.send()
.await;
pub fn delete_api_key(&self) -> DeleteApiKey<'_>
Delete an API key
Sends a DELETE request to /v0/api_keys/{api_key_name}
Arguments:
api_key_name: Unique API key name
let response = client.delete_api_key()
.api_key_name(api_key_name)
.send()
.await;
pub fn get_health(&self) -> GetHealth<'_>
Sends a GET request to /v0/cluster_healthz
let response = client.get_health()
.send()
.await;
pub fn get_config(&self) -> GetConfig<'_>
Retrieve general configuration
Sends a GET request to /v0/config
let response = client.get_config()
.send()
.await;
pub fn get_config_demos(&self) -> GetConfigDemos<'_>
Retrieve the list of demos
Sends a GET request to /v0/config/demos
let response = client.get_config_demos()
.send()
.await;
pub fn get_config_session(&self) -> GetConfigSession<'_>
Retrieve current session information
Sends a GET request to /v0/config/session
let response = client.get_config_session()
.send()
.await;
pub fn get_metrics(&self) -> GetMetrics<'_>
Retrieve the metrics of all running pipelines belonging to this tenant
The metrics are collected by making individual HTTP requests to /metrics
endpoint of each pipeline, of which only successful responses are included
in the returned list.
Sends a GET request to /v0/metrics
let response = client.get_metrics()
.send()
.await;
pub fn list_pipelines(&self) -> ListPipelines<'_>
Retrieve the list of pipelines
Configure which fields are included using the selector query parameter.
Sends a GET request to /v0/pipelines
Arguments:
selector: The selector parameter limits which fields are returned for a pipeline. Limiting the fields is particularly handy, for instance, when frequently monitoring over low-bandwidth connections while only being interested in pipeline status.
let response = client.list_pipelines()
.selector(selector)
.send()
.await;
pub fn post_pipeline(&self) -> PostPipeline<'_>
Create a new pipeline
Sends a POST request to /v0/pipelines
let response = client.post_pipeline()
.body(body)
.send()
.await;
pub fn get_pipeline(&self) -> GetPipeline<'_>
Retrieve a pipeline
Configure which fields are included using the selector query parameter.
Sends a GET request to /v0/pipelines/{pipeline_name}
Arguments:
pipeline_name: Unique pipeline name
selector: The selector parameter limits which fields are returned for a pipeline. Limiting the fields is particularly handy, for instance, when frequently monitoring over low-bandwidth connections while only being interested in pipeline status.
let response = client.get_pipeline()
.pipeline_name(pipeline_name)
.selector(selector)
.send()
.await;
pub fn put_pipeline(&self) -> PutPipeline<'_>
Fully update a pipeline if it already exists, otherwise create a new pipeline
Sends a PUT request to /v0/pipelines/{pipeline_name}
Arguments:
pipeline_name: Unique pipeline name
body
let response = client.put_pipeline()
.pipeline_name(pipeline_name)
.body(body)
.send()
.await;
pub fn delete_pipeline(&self) -> DeletePipeline<'_>
Delete a pipeline
Sends a DELETE request to /v0/pipelines/{pipeline_name}
Arguments:
pipeline_name: Unique pipeline name
let response = client.delete_pipeline()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn patch_pipeline(&self) -> PatchPipeline<'_>
Partially update a pipeline
Sends a PATCH request to /v0/pipelines/{pipeline_name}
Arguments:
pipeline_name: Unique pipeline name
body
let response = client.patch_pipeline()
.pipeline_name(pipeline_name)
.body(body)
.send()
.await;
pub fn post_pipeline_activate(&self) -> PostPipelineActivate<'_>
Requests the pipeline to activate if it is currently in standby mode, which it will do
asynchronously.
Progress should be monitored by polling the pipeline GET endpoints.
This endpoint is only applicable when the pipeline is configured to start from object store and started as standby.
Sends a POST request to /v0/pipelines/{pipeline_name}/activate
Arguments:
pipeline_name: Unique pipeline name
let response = client.post_pipeline_activate()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn checkpoint_pipeline(&self) -> CheckpointPipeline<'_>
Initiates checkpoint for a running or paused pipeline
Returns a checkpoint sequence number that can be used with /checkpoint_status to
determine when the checkpoint has completed.
Sends a POST request to /v0/pipelines/{pipeline_name}/checkpoint
Arguments:
pipeline_name: Unique pipeline name
let response = client.checkpoint_pipeline()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn sync_checkpoint(&self) -> SyncCheckpoint<'_>
Syncs latest checkpoints to the object store configured in pipeline config
Sends a POST request to /v0/pipelines/{pipeline_name}/checkpoint/sync
Arguments:
pipeline_name: Unique pipeline name
let response = client.sync_checkpoint()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_checkpoint_sync_status(&self) -> GetCheckpointSyncStatus<'_>
Retrieve status of checkpoint sync activity in a pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/checkpoint/sync_status
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_checkpoint_sync_status()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_checkpoint_status(&self) -> GetCheckpointStatus<'_>
Retrieve status of checkpoint activity in a pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/checkpoint_status
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_checkpoint_status()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_pipeline_circuit_profile(&self) -> GetPipelineCircuitProfile<'_>
Retrieve the circuit performance profile of a running or paused pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/circuit_profile
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_circuit_profile()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn post_pipeline_clear(&self) -> PostPipelineClear<'_>
Clears the pipeline storage asynchronously
IMPORTANT: Clearing means disassociating the storage from the pipeline. Depending on the storage type this can include its deletion.
It sets the storage state to Clearing, after which the clearing process is
performed asynchronously. Progress should be monitored by polling the pipeline
using the GET endpoints. A /clear operation cannot be cancelled.
Sends a POST request to /v0/pipelines/{pipeline_name}/clear
Arguments:
pipeline_name: Unique pipeline name
let response = client.post_pipeline_clear()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn commit_transaction(&self) -> CommitTransaction<'_>
Commit the current transaction
Sends a POST request to /v0/pipelines/{pipeline_name}/commit_transaction
Arguments:
pipeline_name: Unique pipeline name
let response = client.commit_transaction()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn completion_status(&self) -> CompletionStatus<'_>
Check the status of a completion token returned by the /ingress or /completion_token
endpoint.
Sends a GET request to /v0/pipelines/{pipeline_name}/completion_status
Arguments:
pipeline_name: Unique pipeline name
token: Completion token returned by the /ingress or /completion_token endpoint.
let response = client.completion_status()
.pipeline_name(pipeline_name)
.token(token)
.send()
.await;
pub fn http_output(&self) -> HttpOutput<'_>
Subscribe to a stream of updates from a SQL view or table
The pipeline responds with a continuous stream of changes to the specified
table or view, encoded using the format specified in the ?format=
parameter. Updates are split into Chunks.
The pipeline continues sending updates until the client closes the connection or the pipeline is stopped.
Sends a POST request to /v0/pipelines/{pipeline_name}/egress/{table_name}
Arguments:
pipeline_name: Unique pipeline name
table_name: SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.
array: Set to true to group updates in this stream into JSON arrays (used in conjunction with format=json). The default value is false.
backpressure: Apply backpressure on the pipeline when the HTTP client cannot receive data fast enough. When this flag is false (the default), the HTTP connector drops data chunks if the client is not keeping up with its output. This prevents a slow HTTP client from slowing down the entire pipeline. When the flag is true, the connector waits for the client to receive each chunk and blocks the pipeline if the client cannot keep up.
format: Output data format, e.g., 'csv' or 'json'.
let response = client.http_output()
.pipeline_name(pipeline_name)
.table_name(table_name)
.array(array)
.backpressure(backpressure)
.format(format)
.send()
.await;
pub fn get_pipeline_heap_profile(&self) -> GetPipelineHeapProfile<'_>
Retrieve the heap profile of a running or paused pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/heap_profile
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_heap_profile()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn http_input(&self) -> HttpInput<'_>
Push data to a SQL table
The client sends data encoded using the format specified in the ?format=
parameter as a body of the request. The contents of the data must match
the SQL table schema specified in table_name
The pipeline ingests data as it arrives without waiting for the end of the request. A successful HTTP response indicates that all data has been ingested successfully.
On success, returns a completion token that can be passed to the ‘/completion_status’ endpoint to check whether the pipeline has fully processed the data.
Sends a POST request to /v0/pipelines/{pipeline_name}/ingress/{table_name}
Arguments:
pipeline_name: Unique pipeline name
table_name: SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.
array: Set to true if updates in this stream are packaged into JSON arrays (used in conjunction with format=json). The default value is false.
force: When true, push data to the pipeline even if the pipeline is paused. The default value is false.
format: Input data format, e.g., 'csv' or 'json'.
update_format: JSON data change event format (used in conjunction with format=json). The default value is 'insert_delete'.
body: Input data in the specified format
let response = client.http_input()
.pipeline_name(pipeline_name)
.table_name(table_name)
.array(array)
.force(force)
.format(format)
.update_format(update_format)
.body(body)
.send()
.await;
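A typical ingest-and-wait flow pushes data via /ingress, keeps the returned completion token, and then polls /completion_status until the data has been fully processed. A sketch of the waiting half, with the endpoint call stubbed out by a closure (the `TokenStatus` enum is an illustrative stand-in for the real response):

```rust
use std::time::{Duration, Instant};

/// Illustrative completion-token status, standing in for the real
/// response of the /completion_status endpoint.
#[derive(Debug, Clone, Copy, PartialEq)]
enum TokenStatus {
    InProgress,
    Complete,
}

/// Poll a completion token until the pipeline has fully processed the
/// input it covers, or give up after `timeout`.
fn wait_for_completion<F>(mut check: F, timeout: Duration) -> Result<(), String>
where
    F: FnMut() -> TokenStatus,
{
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if check() == TokenStatus::Complete {
            return Ok(());
        }
        // Back off briefly between polls.
        std::thread::sleep(Duration::from_millis(50));
    }
    Err("completion token did not complete in time".to_string())
}
```

In a real program, `check` would call `client.completion_status()` with the token returned by `client.http_input()`.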
pub fn get_pipeline_logs(&self) -> GetPipelineLogs<'_>
Retrieve logs of a pipeline as a stream
The logs stream catches up to the extent of the internally configured per-pipeline circular logs buffer (limited to a certain byte size and number of lines, whichever is reached first). After the catch-up, new lines are pushed whenever they become available.
It is possible for the logs stream to end prematurely if the API server temporarily loses its connection to the runner. In this case, a new request to this endpoint must be issued.
The logs stream will end when the pipeline is deleted, or if the runner restarts. Note that in both cases the logs will be cleared.
Sends a GET request to /v0/pipelines/{pipeline_name}/logs
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_logs()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_pipeline_metrics(&self) -> GetPipelineMetrics<'_>
Retrieve circuit metrics of a running or paused pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/metrics
Arguments:
pipeline_name: Unique pipeline name
format
let response = client.get_pipeline_metrics()
.pipeline_name(pipeline_name)
.format(format)
.send()
.await;
pub fn post_pipeline_pause(&self) -> PostPipelinePause<'_>
Requests the pipeline to pause, which it will do asynchronously
Progress should be monitored by polling the pipeline GET endpoints.
Sends a POST request to /v0/pipelines/{pipeline_name}/pause
Arguments:
pipeline_name: Unique pipeline name
let response = client.post_pipeline_pause()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn pipeline_adhoc_sql(&self) -> PipelineAdhocSql<'_>
Execute an ad-hoc SQL query in a running or paused pipeline
The evaluation is not incremental.
Sends a GET request to /v0/pipelines/{pipeline_name}/query
Arguments:
pipeline_name: Unique pipeline name
format: Input data format, e.g., 'text', 'json' or 'parquet'
sql: SQL query to execute
let response = client.pipeline_adhoc_sql()
.pipeline_name(pipeline_name)
.format(format)
.sql(sql)
.send()
.await;
pub fn post_pipeline_resume(&self) -> PostPipelineResume<'_>
Requests the pipeline to resume, which it will do asynchronously
Progress should be monitored by polling the pipeline GET endpoints.
Sends a POST request to /v0/pipelines/{pipeline_name}/resume
Arguments:
pipeline_name: Unique pipeline name
let response = client.post_pipeline_resume()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn post_pipeline_start(&self) -> PostPipelineStart<'_>
Start the pipeline asynchronously by updating the desired status
The endpoint returns immediately after setting the desired status.
The procedure to get to the desired status is performed asynchronously.
Progress should be monitored by polling the pipeline GET endpoints.
Note the following:
- A stopped pipeline can be started by calling /start?initial=running, /start?initial=paused, or /start?initial=standby.
- If the pipeline is already (being) started (provisioned), it will still return success.
- It is not possible to call /start when the pipeline has already had /stop called and is in the process of suspending or stopping.
Sends a POST request to /v0/pipelines/{pipeline_name}/start
Arguments:
pipeline_name: Unique pipeline name
initial: The initial parameter determines whether, after provisioning, the pipeline becomes standby, paused or running (the only valid values).
let response = client.post_pipeline_start()
.pipeline_name(pipeline_name)
.initial(initial)
.send()
.await;
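Because /start only records the desired status, callers typically poll the pipeline GET endpoints until that status is reached. A generic sketch of such a poll loop (the `PipelineStatus` enum and the polled closure are illustrative stand-ins for the real status values returned by the API):

```rust
use std::time::{Duration, Instant};

/// Illustrative subset of pipeline deployment statuses.
#[derive(Debug, Clone, Copy, PartialEq)]
enum PipelineStatus {
    Provisioning,
    Running,
    Stopped,
}

/// Poll `get_status` until it reports `desired` or the deadline passes,
/// so the caller never hangs indefinitely on a pipeline operation.
fn wait_for_status<F>(
    mut get_status: F,
    desired: PipelineStatus,
    timeout: Duration,
) -> Result<(), String>
where
    F: FnMut() -> PipelineStatus,
{
    let deadline = Instant::now() + timeout;
    while Instant::now() < deadline {
        if get_status() == desired {
            return Ok(());
        }
        std::thread::sleep(Duration::from_millis(100));
    }
    Err(format!("timed out waiting for {:?}", desired))
}
```

In a real program, `get_status` would call `client.get_pipeline()` (possibly with a selector limiting the returned fields) and extract the deployment status.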
pub fn start_transaction(&self) -> StartTransaction<'_>
Start a transaction
Sends a POST request to /v0/pipelines/{pipeline_name}/start_transaction
Arguments:
pipeline_name: Unique pipeline name
let response = client.start_transaction()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_pipeline_stats(&self) -> GetPipelineStats<'_>
Retrieve statistics (e.g., performance counters) of a running or paused pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/stats
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_stats()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn post_pipeline_stop(&self) -> PostPipelineStop<'_>
Stop the pipeline asynchronously by updating the desired state
There are two variants:
- /stop?force=false (default): the pipeline will first atomically checkpoint before deprovisioning the compute resources. When resuming, the pipeline will start from this checkpoint.
- /stop?force=true: the compute resources will be immediately deprovisioned. When resuming, it will pick up the latest checkpoint made by the periodic checkpointer or by a prior /checkpoint call.
The endpoint returns immediately after setting the desired state to Suspended for
?force=false or Stopped for ?force=true. In the former case, once the pipeline has
successfully passed the Suspending state, the desired state will become Stopped as well.
The procedure to get to the desired state is performed asynchronously. Progress should be
monitored by polling the pipeline GET endpoints.
Note the following:
- The suspending that is done with /stop?force=false is not guaranteed to succeed:
  - If an error is returned during the suspension, the pipeline will be forcefully stopped with that error set.
  - Otherwise, it will keep trying to suspend, in which case it is possible to cancel suspending by calling /stop?force=true.
- /stop?force=true cannot be cancelled: the pipeline must first reach Stopped before another action can be done.
- A pipeline which is in the process of suspending or stopping can only be forcefully stopped.
Sends a POST request to /v0/pipelines/{pipeline_name}/stop
Arguments:
pipeline_name: Unique pipeline name
force: The force parameter determines whether to immediately deprovision the pipeline compute resources (force=true) or first attempt to atomically checkpoint before doing so (force=false, the default).
let response = client.post_pipeline_stop()
.pipeline_name(pipeline_name)
.force(force)
.send()
.await;
pub fn get_pipeline_support_bundle(&self) -> GetPipelineSupportBundle<'_>
Generate a support bundle for a pipeline
This endpoint collects various diagnostic data from the pipeline including circuit profile, heap profile, metrics, logs, stats, and connector statistics, and packages them into a single ZIP file for support purposes.
Sends a GET request to /v0/pipelines/{pipeline_name}/support_bundle
Arguments:
pipeline_name: Unique pipeline name
circuit_profile: Whether to collect circuit profile data (default: true)
heap_profile: Whether to collect heap profile data (default: true)
logs: Whether to collect logs data (default: true)
metrics: Whether to collect metrics data (default: true)
pipeline_config: Whether to collect pipeline configuration data (default: true)
stats: Whether to collect stats data (default: true)
system_config: Whether to collect system configuration data (default: true)
let response = client.get_pipeline_support_bundle()
.pipeline_name(pipeline_name)
.circuit_profile(circuit_profile)
.heap_profile(heap_profile)
.logs(logs)
.metrics(metrics)
.pipeline_config(pipeline_config)
.stats(stats)
.system_config(system_config)
.send()
.await;
pub fn completion_token(&self) -> CompletionToken<'_>
Generate a completion token for an input connector
Returns a token that can be passed to the /completion_status endpoint
to check whether the pipeline has finished processing all inputs received from the
connector before the token was generated.
Sends a GET request to /v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/completion_token
Arguments:
pipeline_name: Unique pipeline name
table_name: SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.
connector_name: Unique input connector name
let response = client.completion_token()
.pipeline_name(pipeline_name)
.table_name(table_name)
.connector_name(connector_name)
.send()
.await;
pub fn get_pipeline_input_connector_status(&self) -> GetPipelineInputConnectorStatus<'_>
Retrieve the status of an input connector
Sends a GET request to /v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/stats
Arguments:
pipeline_name: Unique pipeline name
table_name: Unique table name
connector_name: Unique input connector name
let response = client.get_pipeline_input_connector_status()
.pipeline_name(pipeline_name)
.table_name(table_name)
.connector_name(connector_name)
.send()
.await;
pub fn post_pipeline_input_connector_action(&self) -> PostPipelineInputConnectorAction<'_>
Start (resume) or pause the input connector
The following values of the action argument are accepted: start and pause.
Input connectors can be in either the Running or Paused state. By default,
connectors are initialized in the Running state when a pipeline is deployed.
In this state, the connector actively fetches data from its configured data
source and forwards it to the pipeline. If needed, a connector can be created
in the Paused state by setting its
paused property
to true. When paused, the connector remains idle until reactivated using the
start command. Conversely, a connector in the Running state can be paused
at any time by issuing the pause command.
The current connector state can be retrieved via the
GET /v0/pipelines/{pipeline_name}/stats endpoint.
Note that the input connector is only active if both the pipeline and the connector state are Running.
Pipeline state   Connector state   Connector is active?
--------------   ---------------   --------------------
Paused           Paused            No
Paused           Running           No
Running          Paused            No
Running          Running           Yes

Sends a POST request to /v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/{action}
Arguments:
pipeline_name: Unique pipeline name
table_name: Unique table name
connector_name: Unique input connector name
action: Input connector action (one of: start, pause)
let response = client.post_pipeline_input_connector_action()
.pipeline_name(pipeline_name)
.table_name(table_name)
.connector_name(connector_name)
.action(action)
.send()
.await;
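The activity table above boils down to a single conjunction, sketched here (the `State` enum is an illustrative stand-in for the real pipeline and connector states):

```rust
/// Illustrative state shared by pipelines and input connectors.
#[derive(Debug, Clone, Copy, PartialEq)]
enum State {
    Running,
    Paused,
}

/// An input connector is only active when both the pipeline and the
/// connector itself are in the Running state.
fn connector_is_active(pipeline: State, connector: State) -> bool {
    pipeline == State::Running && connector == State::Running
}
```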
pub fn post_pipeline_testing(&self) -> PostPipelineTesting<'_>
This endpoint is used as part of the test harness. Only available if the testing
unstable feature is enabled. Do not use in production.
Sends a POST request to /v0/pipelines/{pipeline_name}/testing
Arguments:
pipeline_name: Unique pipeline name
set_platform_version
let response = client.post_pipeline_testing()
.pipeline_name(pipeline_name)
.set_platform_version(set_platform_version)
.send()
.await;
pub fn get_pipeline_time_series(&self) -> GetPipelineTimeSeries<'_>
Retrieve time series for statistics of a running or paused pipeline
Sends a GET request to /v0/pipelines/{pipeline_name}/time_series
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_time_series()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_pipeline_time_series_stream(&self) -> GetPipelineTimeSeriesStream<'_>
Stream time series for statistics of a running or paused pipeline
Returns a snapshot of all existing time series data followed by a continuous stream of new time series data points as they become available. The response is in newline-delimited JSON format (NDJSON) where each line is a JSON object representing a single time series data point.
Sends a GET request to /v0/pipelines/{pipeline_name}/time_series_stream
Arguments:
pipeline_name: Unique pipeline name
let response = client.get_pipeline_time_series_stream()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn post_update_runtime(&self) -> PostUpdateRuntime<'_>
Recompile a pipeline with the Feldera runtime version included in the
currently installed Feldera platform.
Use this endpoint after upgrading Feldera to rebuild pipelines that were compiled with older platform versions. In most cases, recompilation is not required; pipelines compiled with older versions will continue to run on the upgraded platform.
Situations where recompilation may be necessary:
- To benefit from the latest bug fixes and performance optimizations.
- When backward-incompatible changes are introduced in Feldera. In this case, attempting to start a pipeline compiled with an unsupported version will result in an error.
If the pipeline is already compiled with the current platform version, this operation is a no-op.
Note that recompiling the pipeline with a new platform version may change its query plan. If the modified pipeline is started from an existing checkpoint, it may require bootstrapping parts of its state from scratch. See Feldera documentation for details on the bootstrapping process.
Sends a POST request to /v0/pipelines/{pipeline_name}/update_runtime
Arguments:
pipeline_name: Unique pipeline name
let response = client.post_update_runtime()
.pipeline_name(pipeline_name)
.send()
.await;
pub fn get_pipeline_output_connector_status(&self) -> GetPipelineOutputConnectorStatus<'_>
Retrieve the status of an output connector
Sends a GET request to /v0/pipelines/{pipeline_name}/views/{view_name}/connectors/{connector_name}/stats
Arguments:
pipeline_name: Unique pipeline name
view_name: Unique SQL view name
connector_name: Unique output connector name
let response = client.get_pipeline_output_connector_status()
.pipeline_name(pipeline_name)
.view_name(view_name)
.connector_name(connector_name)
.send()
.await;