{
"openapi": "3.0.3",
"info": {
"title": "Feldera API",
"description": "\nWith Feldera, users create data pipelines out of SQL programs.\nA SQL program comprises tables and views, and includes as well the definition of\ninput and output connectors for each respectively. A connector defines a data\nsource or data sink to feed input data into tables or receive output data\ncomputed by the views respectively.\n\n## Pipeline\n\nThe API is centered around the **pipeline**, which most importantly consists\nout of the SQL program, but also has accompanying metadata and configuration parameters\n(e.g., compilation profile, number of workers, etc.).\n\n* A pipeline is identified and referred to by its user-provided unique name.\n* The pipeline program is asynchronously compiled when the pipeline is first created or\n when its program is subsequently updated.\n* Pipeline deployment start is only able to proceed to provisioning once the program is successfully\n compiled.\n* A pipeline cannot be updated while it is deployed.\n\n## Concurrency\n\nEach pipeline has a version, which is incremented each time its core fields are updated.\nThe version is monotonically increasing. There is additionally a program version which covers\nonly the program-related core fields, and is used by the compiler to discern when to recompile.\n\n## Client request handling\n\n### Request outcome expectations\n\nThe outcome of a request is that it either fails (e.g., DNS lookup failed) without any response\n(no status code nor body), or it succeeds and gets back a response status code and body.\n\nIn case of a response, usually it is the Feldera endpoint that generated it:\n- If it is success (2xx), it will return whichever body belongs to the success response.\n- Otherwise, if it is an error (4xx, 5xx), it will return a Feldera error response JSON body\n which will have an application-level `error_code`.\n\nHowever, there are two notable exceptions when the response is not generated by the Feldera\nendpoint:\n- If the HTTP server, to which the endpoint belongs, encountered an issue, it might return\n 4xx (e.g., for an unknown endpoint) or 5xx error codes by itself (e.g., when it is initializing).\n- If the Feldera API server is behind a (reverse) proxy, the proxy can return error codes by itself,\n for example BAD GATEWAY (502) or GATEWAY TIMEOUT (504).\n\nAs such, it is not guaranteed that the (4xx, 5xx) will have a Feldera error response JSON body\nin these latter cases.\n\n### Error handling and retrying\n\nThe error type returned by the client should distinguish between the error responses generated\nby Feldera endpoints themselves (which have a Feldera error response body) and those that are\ngenerated by other sources.\n\nIn order for a client operation (e.g., `pipeline.resume()`) to be robust (i.e., not fail due to\na single HTTP request not succeeding) the client should use a retry mechanism if the operation\nis idempotent. The retry mechanism must however have a time limit, after which it times out.\nThis guarantees that the client operation is eventually responsive, which enables the script\nit is a part of to not hang indefinitely on Feldera operations and instead be able to decide\nby itself whether and how to proceed. If no response is returned, the mechanism should generally\nretry. 
When a response is returned, the decision whether to retry can generally depend on the status\ncode: especially the status codes 408, 502, 503 and 504 should be considered as transient errors.\nFiner grained retry decisions should be made by taking into account the application-level\n`error_code` if the response body was indeed a Feldera error response body.\n\n## Feldera client errors (4xx)\n\n_Client behavior:_ clients should generally return with an error when they get back a 4xx status\ncode, as it usually means the request will likely not succeed even if it is sent again. Certain\nrequests might make use of a timed retry mechanism when the client error is transient without\nrequiring any user intervention to overcome, for instance a transaction already being in progress\nleading to a temporary CONFLICT (409) error.\n\n- **BAD REQUEST (400)**: invalid user request (general).\n - _Example:_ the new pipeline name `example1@~` contains invalid characters.\n\n- **UNAUTHORIZED (401)**: the user is not authorized to issue the request.\n - _Example:_ an invalid API key is provided.\n\n- **NOT FOUND (404)**: a resource required to exist in order to process the request was not found.\n - _Example:_ a pipeline named `example` does not exist when trying to update it.\n\n- **CONFLICT (409)**: there is a conflict between the request and a relevant resource.\n - _Example:_ a pipeline named `example` already exists.\n - _Example:_ another transaction is already in process.\n\n## Feldera server errors (5xx)\n\n- **INTERNAL SERVER ERROR (500)**: the server is unexpectedly unable to process the request\n (general).\n - _Example:_ unable to reach the database.\n - _Client behavior:_ immediately return with an error.\n\n- **NOT IMPLEMENTED (501)**: the server does not implement functionality required to process the\n request.\n - _Example:_ making a request to an enterprise-only endpoint in the OSS edition.\n - _Client behavior:_ immediately return with an error.\n\n- **SERVICE UNAVAILABLE (503)**: the server is not (yet) able to process the request.\n - _Example:_ pausing a pipeline which is still provisioning.\n - _Client behavior:_ depending on the type of request, client may use a timed retry mechanism.\n\n## Feldera error response body\n\nWhen the Feldera API returns an HTTP error status code (4xx, 5xx), the body will contain the\nfollowing JSON object:\n\n```json\n{\n \"message\": \"Human-readable explanation.\",\n \"error_code\": \"CodeSpecifyingError\",\n \"details\": {\n\n }\n}\n```\n\nIt contains the following fields:\n- **message (string)**: human-readable explanation of the error that occurred and potentially\n hinting what can be done about it.\n- **error_code (string)**: application-level code about the error that occurred, written in CamelCase.\n For example: `UnknownPipelineName`, `DuplicateName`, `PauseWhileNotProvisioned`, ... .\n- **details (object)**: JSON object corresponding to the `error_code` with fields that provide\n details relevant to it. For example: if a name is unknown, a field with the unknown name in\n question.\n",
"contact": {
"name": "Feldera Team",
"email": "dev@feldera.com"
},
"license": {
"name": "MIT OR Apache-2.0"
},
"version": "0.239.0"
},
"paths": {
"/config/authentication": {
"get": {
"tags": [
"Platform"
],
"summary": "Get Auth Config",
"description": "Retrieve the authentication provider configuration.",
"operationId": "get_config_authentication",
"responses": {
"200": {
"description": "The response body contains Authentication Provider configuration, or is empty if no auth is configured.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/AuthProvider"
}
}
}
},
"500": {
"description": "Request failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
}
}
},
"/v0/api_keys": {
"get": {
"tags": [
"Platform"
],
"summary": "List API Keys",
"description": "Retrieve a list of your API keys.",
"operationId": "list_api_keys",
"responses": {
"200": {
"description": "API keys retrieved successfully",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ApiKeyDescr"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"post": {
"tags": [
"Platform"
],
"summary": "Create API Key",
"description": "Create a new API key with the specified name. The generated API key\nwill be returned in the response and cannot be retrieved again later.",
"operationId": "post_api_key",
"requestBody": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/NewApiKeyRequest"
}
}
},
"required": true
},
"responses": {
"201": {
"description": "API key created successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/NewApiKeyResponse"
}
}
}
},
"409": {
"description": "API key with that name already exists",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "An entity with this name already exists",
"error_code": "DuplicateName",
"details": null
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/api_keys/{api_key_name}": {
"get": {
"tags": [
"Platform"
],
"summary": "Get API Key",
"description": "Retrieve the metadata of a specific API key by its name.",
"operationId": "get_api_key",
"parameters": [
{
"name": "api_key_name",
"in": "path",
"description": "Unique API key name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "API key retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ApiKeyDescr"
}
}
}
},
"404": {
"description": "API key with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown API key 'non-existent-api-key'",
"error_code": "UnknownApiKey",
"details": {
"name": "non-existent-api-key"
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"delete": {
"tags": [
"Platform"
],
"summary": "Delete API Key",
"description": "Remove an API key by its name.",
"operationId": "delete_api_key",
"parameters": [
{
"name": "api_key_name",
"in": "path",
"description": "Unique API key name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "API key deleted successfully"
},
"404": {
"description": "API key with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown API key 'non-existent-api-key'",
"error_code": "UnknownApiKey",
"details": {
"name": "non-existent-api-key"
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/cluster/events": {
"get": {
"tags": [
"Platform"
],
"summary": "List Cluster Events",
"description": "Retrieve a list of retained cluster monitor events ordered from most recent to least recent.\n\nThe returned events only have limited details, the full details can be retrieved using\nthe `GET /v0/cluster/events/<event-id>` endpoint.\n\nCluster monitor events are collected at a periodic interval (every 10s), however only\nevery 10 minutes or if the overall health changes, does it get inserted into the database\n(and thus, served by this endpoint). At most 1000 events are retained (newest first),\nand events older than 72h are deleted. The latest event, if it already exists, is never\ncleaned up.",
"operationId": "list_cluster_events",
"responses": {
"200": {
"description": "",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ClusterMonitorEventSelectedInfo"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"501": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/cluster/events/{event_id}": {
"get": {
"tags": [
"Platform"
],
"summary": "Get Cluster Event",
"description": "Get specific cluster monitor event.\n\nThe identifiers of the events can be retrieved via `GET /v0/cluster/events`.\nAt most 1000 events are retained (newest first), and events older than 72h are deleted.\nThe latest event, if it already exists, is never cleaned up.\nThis endpoint can return a 404 for an event that no longer exists due to clean-up.",
"operationId": "get_cluster_event",
"parameters": [
{
"name": "event_id",
"in": "path",
"description": "Cluster monitor event identifier or `latest`",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "selector",
"in": "query",
"description": "The `selector` parameter limits which fields are returned.\nLimiting which fields is particularly handy for instance when frequently\nmonitoring over low bandwidth connections while being only interested\nin status.",
"required": false,
"schema": {
"$ref": "#/components/schemas/ClusterMonitorEventFieldSelector"
}
}
],
"responses": {
"200": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ClusterMonitorEventSelectedInfo"
}
}
}
},
"404": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"501": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/cluster_healthz": {
"get": {
"tags": [
"Platform"
],
"summary": "Check Cluster Health",
"description": "Determine the latest cluster health via the latest cluster monitor event.",
"operationId": "get_cluster_health",
"responses": {
"200": {
"description": "All services healthy",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HealthStatus"
}
}
}
},
"503": {
"description": "One or more services unhealthy",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/HealthStatus"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/config": {
"get": {
"tags": [
"Platform"
],
"summary": "Get Platform Config",
"description": "Retrieve configuration of the Feldera Platform.",
"operationId": "get_config",
"responses": {
"200": {
"description": "The response body contains basic configuration information about this host.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Configuration"
}
}
}
},
"500": {
"description": "Request failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/config/demos": {
"get": {
"tags": [
"Platform"
],
"summary": "List Demos",
"description": "Retrieve the list of demos available in the WebConsole.",
"operationId": "get_config_demos",
"responses": {
"200": {
"description": "List of demos",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Demo"
}
}
}
}
},
"500": {
"description": "Failed to read demos from the demos directories",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/config/session": {
"get": {
"tags": [
"Platform"
],
"summary": "Get Session",
"description": "Retrieve login session information for your current user session.",
"operationId": "get_config_session",
"responses": {
"200": {
"description": "The response body contains current session information including tenant details.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/SessionInfo"
}
}
}
},
"500": {
"description": "Request failed.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/metrics": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "List All Metrics",
"description": "Retrieve the metrics of all running pipelines belonging to this tenant.\n\nThe metrics are collected by making individual HTTP requests to `/metrics`\nendpoint of each pipeline, of which only successful responses are included\nin the returned list.",
"operationId": "get_metrics",
"responses": {
"200": {
"description": "Metrics of all running pipelines belonging to this tenant in Prometheus format",
"content": {
"text/plain": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines": {
"get": {
"tags": [
"Pipeline CRUD"
],
"summary": "List Pipelines",
"description": "Retrieve the list of pipelines.\nConfigure which fields are included using the `selector` query parameter.",
"operationId": "list_pipelines",
"parameters": [
{
"name": "selector",
"in": "query",
"description": "The `selector` parameter limits which fields are returned for a pipeline.\nLimiting which fields is particularly handy for instance when frequently\nmonitoring over low bandwidth connections while being only interested\nin pipeline status.",
"required": false,
"schema": {
"$ref": "#/components/schemas/PipelineFieldSelector"
}
}
],
"responses": {
"200": {
"description": "List of pipelines retrieved successfully",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PipelineSelectedInfo"
}
},
"example": [
{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
},
{
"id": "67e55044-10b1-426f-9247-bb680e5fe0c9",
"name": "example2",
"description": "Description of the pipeline example2",
"created_at": "1970-01-01T00:00:00Z",
"version": 1,
"platform_version": "v0",
"runtime_config": {
"workers": 10,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": false,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 100000,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": 1000,
"memory_mb_max": null,
"storage_mb_max": 10000,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 100000,
"pin_cpus": [],
"provisioning_timeout_secs": 1200,
"max_parallel_connector_init": 10,
"init_containers": null,
"checkpoint_during_suspend": false,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table2 ( col2 VARCHAR );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "unoptimized",
"cache": true,
"runtime_version": null
},
"program_version": 1,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 1,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
]
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"post": {
"tags": [
"Pipeline CRUD"
],
"summary": "Create Pipeline",
"description": "Create a new pipeline with the provided configuration.",
"operationId": "post_pipeline",
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PostPutPipeline"
},
"example": {
"name": "example1",
"description": "Description of the pipeline example1",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": null,
"udf_toml": null,
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
}
}
}
},
"required": true
},
"responses": {
"201": {
"description": "Pipeline successfully created",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Name does not match pattern": {
"value": {
"message": "Name 'name-with-invalid-char-#' contains characters which are not lowercase (a-z), uppercase (A-Z), numbers (0-9), underscores (_) or hyphens (-)",
"error_code": "NameDoesNotMatchPattern",
"details": {
"name": "name-with-invalid-char-#"
}
}
}
}
}
}
},
"409": {
"description": "Cannot create pipeline as the name already exists",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "An entity with this name already exists",
"error_code": "DuplicateName",
"details": null
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}": {
"get": {
"tags": [
"Pipeline CRUD"
],
"summary": "Get Pipeline",
"description": "Retrieve a pipeline.\nConfigure which fields are included using the `selector` query parameter.",
"operationId": "get_pipeline",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "selector",
"in": "query",
"description": "The `selector` parameter limits which fields are returned for a pipeline.\nLimiting which fields is particularly handy for instance when frequently\nmonitoring over low bandwidth connections while being only interested\nin pipeline status.",
"required": false,
"schema": {
"$ref": "#/components/schemas/PipelineFieldSelector"
}
}
],
"responses": {
"200": {
"description": "Pipeline retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineSelectedInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"put": {
"tags": [
"Pipeline CRUD"
],
"summary": "Upsert Pipeline",
"description": "Fully update a pipeline if it already exists, otherwise create a new pipeline.",
"operationId": "put_pipeline",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PostPutPipeline"
},
"example": {
"name": "example1",
"description": "Description of the pipeline example1",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": null,
"udf_toml": null,
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
}
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Pipeline successfully updated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"201": {
"description": "Pipeline successfully created",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Cannot update non-stopped pipeline": {
"value": {
"message": "Pipeline can only be updated while stopped. Stop it first by invoking '/stop'.",
"error_code": "UpdateRestrictedToStopped",
"details": null
}
},
"Name does not match pattern": {
"value": {
"message": "Name 'name-with-invalid-char-#' contains characters which are not lowercase (a-z), uppercase (A-Z), numbers (0-9), underscores (_) or hyphens (-)",
"error_code": "NameDoesNotMatchPattern",
"details": {
"name": "name-with-invalid-char-#"
}
}
}
}
}
}
},
"409": {
"description": "Cannot rename pipeline as the new name already exists",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "An entity with this name already exists",
"error_code": "DuplicateName",
"details": null
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"delete": {
"tags": [
"Pipeline CRUD"
],
"summary": "Delete Pipeline",
"description": "Delete an existing pipeline by name.",
"operationId": "delete_pipeline",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline successfully deleted"
},
"400": {
"description": "Pipeline must be fully stopped and cleared to be deleted",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Cannot delete a pipeline which is not fully stopped. Stop the pipeline first fully by invoking the '/stop' endpoint.",
"error_code": "DeleteRestrictedToFullyStopped",
"details": null
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"patch": {
"tags": [
"Pipeline CRUD"
],
"summary": "Patch Pipeline",
"description": "Partially update a pipeline.",
"operationId": "patch_pipeline",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"requestBody": {
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PatchPipeline"
},
"example": {
"name": null,
"description": "This is a new description",
"runtime_config": null,
"program_code": "CREATE TABLE table3 ( col3 INT );",
"udf_rust": null,
"udf_toml": null,
"program_config": null
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Pipeline successfully updated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Cannot update non-stopped pipeline": {
"value": {
"message": "Pipeline can only be updated while stopped. Stop it first by invoking '/stop'.",
"error_code": "UpdateRestrictedToStopped",
"details": null
}
},
"Name does not match pattern": {
"value": {
"message": "Name 'name-with-invalid-char-#' contains characters which are not lowercase (a-z), uppercase (A-Z), numbers (0-9), underscores (_) or hyphens (-)",
"error_code": "NameDoesNotMatchPattern",
"details": {
"name": "name-with-invalid-char-#"
}
}
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"409": {
"description": "Cannot rename pipeline as the name already exists",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "An entity with this name already exists",
"error_code": "DuplicateName",
"details": null
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/activate": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Activate Standby Pipeline",
"description": "Requests the pipeline to activate if it is currently in standby mode, which it will do\nasynchronously.\n\nProgress should be monitored by polling the pipeline `GET` endpoints.\n\nThis endpoint is only applicable when the pipeline is configured to start\nfrom object store and started as standby.",
"operationId": "post_pipeline_activate",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "initial",
"in": "query",
"required": false,
"schema": {
"$ref": "#/components/schemas/RuntimeDesiredStatus"
}
}
],
"responses": {
"202": {
"description": "Pipeline activation initiated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/approve": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Approve Bootstrap",
"description": "Approves the pipeline to proceed with bootstrapping.\n\nThis endpoint is used when a pipeline has been started with\n`bootstrap_policy=await_approval`, it is resuming from an existing checkpoint,\nbut the pipeline has been modified since the checkpoint was made and is\ncurrently in the `AwaitingApproval` state awaiting user approval to proceed\nwith bootstrapping.",
"operationId": "post_pipeline_approve",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"202": {
"description": "Pipeline activation initiated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/checkpoint": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Checkpoint Now",
"description": "Initiates checkpoint for a running or paused pipeline.\n\nReturns a checkpoint sequence number that can be used with `/checkpoint_status` to\ndetermine when the checkpoint has completed.",
"operationId": "checkpoint_pipeline",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Checkpoint initiated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/checkpoint/sync": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Sync Checkpoints To S3",
"description": "Syncs latest checkpoints to the object store configured in pipeline config.",
"operationId": "sync_checkpoint",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Checkpoint synced to object store",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointResponse"
}
}
}
},
"404": {
"description": "No checkpoints found",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/checkpoint/sync_status": {
"get": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Get Checkpoint Sync Status",
"description": "Retrieve status of checkpoint sync activity in a pipeline.",
"operationId": "get_checkpoint_sync_status",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Checkpoint sync status retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointStatus"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/checkpoint_status": {
"get": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Get Checkpoint Status",
"description": "Retrieve status of checkpoint activity in a pipeline.",
"operationId": "get_checkpoint_status",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Checkpoint status retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointStatus"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/checkpoints": {
"get": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Get the checkpoints for a pipeline",
"description": "Retrieve the current checkpoints made by a pipeline.",
"operationId": "get_checkpoints",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Checkpoints retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CheckpointMetadata"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/circuit_json_profile": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Performance Profile JSON",
"description": "Retrieve the circuit performance profile in JSON format of a running or paused pipeline.",
"operationId": "get_pipeline_circuit_json_profile",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Circuit performance profile in JSON format",
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/circuit_profile": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Performance Profile",
"description": "Retrieve the circuit performance profile of a running or paused pipeline.",
"operationId": "get_pipeline_circuit_profile",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Circuit performance profile",
"content": {
"application/zip": {
"schema": {
"type": "object"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/clear": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Clear Storage",
"description": "Clears the pipeline storage asynchronously.\n\nIMPORTANT: Clearing means disassociating the storage from the pipeline.\nDepending on the storage type this can include its deletion.\n\nIt sets the storage state to `Clearing`, after which the clearing process is\nperformed asynchronously. Progress should be monitored by polling the pipeline\nusing the `GET` endpoints. An `/clear` cannot be cancelled.",
"operationId": "post_pipeline_clear",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"202": {
"description": "Action is accepted and is being performed"
},
"400": {
"description": "Action could not be performed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Illegal action": {
"value": {
"message": "Cannot transition from storage status 'cleared' to 'clearing'",
"error_code": "InvalidStorageStatusTransition",
"details": {
"current_status": "Cleared",
"new_status": "Clearing"
}
}
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/commit_transaction": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Commit Transaction",
"description": "Commit the current transaction.",
"operationId": "commit_transaction",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Commit operation initiated."
},
"409": {
"description": "Another transaction is already in progress.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/completion_status": {
"get": {
"tags": [
"Input Connectors"
],
"summary": "Check Completion Status",
"description": "Check the status of a completion token returned by the `/ingress` or `/completion_token`\nendpoint.",
"operationId": "completion_status",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "token",
"in": "query",
"description": "Completion token returned by the '/ingress' or '/completion_status' endpoint.",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "The pipeline has finished processing inputs associated with the provided completion token.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CompletionStatusResponse"
}
}
}
},
"202": {
"description": "The pipeline is still processing inputs associated with the provided completion token.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CompletionStatusResponse"
}
}
}
},
"400": {
"description": "An invalid completion token was provided",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"410": {
"description": "Completion token was created by a previous incarnation of the pipeline and is not valid for the current incarnation. This indicates that the pipeline was suspended and resumed from a checkpoint or restarted after a failure.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/dataflow_graph": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Dataflow Graph",
"description": "Retrieve the dataflow graph of a pipeline.\nThe dataflow graph is generated during SQL compilation and shows the structure\nof the compiled SQL program including the Calcite plan and MIR nodes.",
"operationId": "get_pipeline_dataflow_graph",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Dataflow graph retrieved successfully",
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist or dataflow graph is not available",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/egress/{table_name}": {
"post": {
"tags": [
"Output Connectors"
],
"summary": "Subscribe to View",
"description": "Subscribe to a stream of updates from a SQL view or table.\n\nThe pipeline responds with a continuous stream of changes to the specified\ntable or view, encoded using the format specified in the `?format=`\nparameter. Updates are split into `Chunk`s.\n\nThe pipeline continues sending updates until the client closes the\nconnection or the pipeline is stopped.",
"operationId": "http_output",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "table_name",
"in": "path",
"description": "SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "format",
"in": "query",
"description": "Output data format, e.g., 'csv' or 'json'.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "array",
"in": "query",
"description": "Set to `true` to group updates in this stream into JSON arrays (used in conjunction with `format=json`). The default value is `false`",
"required": false,
"schema": {
"type": "boolean",
"nullable": true
}
},
{
"name": "backpressure",
"in": "query",
"description": "Apply backpressure on the pipeline when the HTTP client cannot receive data fast enough.\n When this flag is set to false (the default), the HTTP connector drops data chunks if the client is not keeping up with its output. This prevents a slow HTTP client from slowing down the entire pipeline.\n When the flag is set to true, the connector waits for the client to receive each chunk and blocks the pipeline if the client cannot keep up.",
"required": false,
"schema": {
"type": "boolean",
"nullable": true
}
}
],
"responses": {
"200": {
"description": "Connection to the endpoint successfully established. The body of the response contains a stream of data chunks.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Chunk"
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline and/or table/view with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Pipeline with that name does not exist": {
"value": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/heap_profile": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Heap Profile",
"description": "Retrieve the heap profile of a running or paused pipeline.",
"operationId": "get_pipeline_heap_profile",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Heap usage profile as a gzipped protobuf that can be inspected by the pprof tool",
"content": {
"application/protobuf": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"400": {
"description": "Getting a heap profile is not supported on this platform",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/ingress/{table_name}": {
"post": {
"tags": [
"Input Connectors"
],
"summary": "Insert Data",
"description": "Push data to a SQL table.\n\nThe client sends data encoded using the format specified in the `?format=`\nparameter as a body of the request. The contents of the data must match\nthe SQL table schema specified in `table_name`\n\nThe pipeline ingests data as it arrives without waiting for the end of\nthe request. Successful HTTP response indicates that all data has been\ningested successfully.\n\nOn success, returns a completion token that can be passed to the\n'/completion_status' endpoint to check whether the pipeline has fully\nprocessed the data.",
"operationId": "http_input",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "table_name",
"in": "path",
"description": "SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "force",
"in": "query",
"description": "When `true`, push data to the pipeline even if the pipeline is paused. The default value is `false`",
"required": true,
"schema": {
"type": "boolean"
}
},
{
"name": "format",
"in": "query",
"description": "Input data format, e.g., 'csv' or 'json'.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "array",
"in": "query",
"description": "Set to `true` if updates in this stream are packaged into JSON arrays (used in conjunction with `format=json`). The default values is `false`.",
"required": false,
"schema": {
"type": "boolean",
"nullable": true
}
},
{
"name": "update_format",
"in": "query",
"description": "JSON data change event format (used in conjunction with `format=json`). The default value is 'insert_delete'.",
"required": false,
"schema": {
"allOf": [
{
"$ref": "#/components/schemas/JsonUpdateFormat"
}
],
"nullable": true
}
}
],
"requestBody": {
"description": "Input data in the specified format",
"content": {
"text/plain": {
"schema": {
"type": "string"
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Data successfully delivered to the pipeline. The body of the response contains a completion token that can be passed to the '/completion_status' endpoint to check whether the pipeline has fully processed the data.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CompletionTokenResponse"
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline and/or table with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Pipeline with that name does not exist": {
"value": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/logs": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Stream Pipeline Logs",
"description": "Retrieve logs of a pipeline as a stream.\n\nThe logs stream catches up to the extent of the internally configured per-pipeline\ncircular logs buffer (limited to a certain byte size and number of lines, whichever\nis reached first). After the catch-up, new lines are pushed whenever they become\navailable.\n\nIt is possible for the logs stream to end prematurely due to the API server temporarily losing\nconnection to the runner. In this case, it is needed to issue again a new request to this\nendpoint.\n\nThe logs stream will end when the pipeline is deleted, or if the runner restarts. Note that in\nboth cases the logs will be cleared.",
"operationId": "get_pipeline_logs",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline logs retrieved successfully",
"content": {
"text/plain": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Runner response timeout": {
"value": {
"message": "Unable to reach pipeline runner to interact due to: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)",
"error_code": "RunnerInteractionUnreachable",
"details": {
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/metrics": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Pipeline Metrics",
"description": "Retrieve the metrics of a running or paused pipeline.",
"operationId": "get_pipeline_metrics",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "format",
"in": "query",
"required": false,
"schema": {
"$ref": "#/components/schemas/MetricsFormat"
}
}
],
"responses": {
"200": {
"description": "Pipeline circuit metrics retrieved successfully",
"content": {
"application/octet-stream": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/pause": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Pause Pipeline",
"description": "Requests the pipeline to pause, which it will do asynchronously.\n\nProgress should be monitored by polling the pipeline `GET` endpoints.",
"operationId": "post_pipeline_pause",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"202": {
"description": "Action is accepted and is being performed"
},
"400": {
"description": "Action could not be performed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Deployment resources status (current: 'Stopping', desired: 'Stopped') cannot have desired changed to 'Provisioned'. Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline.",
"error_code": "IllegalPipelineAction",
"details": {
"status": "Stopping",
"current_desired_status": "Stopped",
"new_desired_status": "Provisioned",
"hint": "Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline."
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/query": {
"get": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Execute Ad-hoc SQL",
"description": "Execute ad-hoc SQL in a running or paused pipeline.\n\nThe evaluation is not incremental.",
"operationId": "pipeline_adhoc_sql",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "sql",
"in": "query",
"description": "SQL query to execute",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "format",
"in": "query",
"description": "Input data format, e.g., 'text', 'json' or 'parquet'",
"required": true,
"schema": {
"$ref": "#/components/schemas/AdHocResultFormat"
}
}
],
"responses": {
"200": {
"description": "Ad-hoc SQL query result",
"content": {
"text/plain": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"400": {
"description": "Invalid SQL query",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/rebalance": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Initiate rebalancing.",
"description": "Initiate immediate rebalancing of the pipeline. Normally rebalancing is initiated automatically\nwhen the drift in the size of joined relations exceeds a threshold. This endpoint forces the balancer\nto reevaluate and apply an optimal partitioning policy regardless of the threshold.\n\nThis operation is a no-op unless the `adaptive_joins` feature is enabled in `dev_tweaks`.",
"operationId": "post_pipeline_rebalance",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Rebalancing started successfully"
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/resume": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Resume Pipeline",
"description": "Requests the pipeline to resume, which it will do asynchronously.\n\nProgress should be monitored by polling the pipeline `GET` endpoints.",
"operationId": "post_pipeline_resume",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"202": {
"description": "Action is accepted and is being performed"
},
"400": {
"description": "Action could not be performed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Deployment resources status (current: 'Stopping', desired: 'Stopped') cannot have desired changed to 'Provisioned'. Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline.",
"error_code": "IllegalPipelineAction",
"details": {
"status": "Stopping",
"current_desired_status": "Stopped",
"new_desired_status": "Provisioned",
"hint": "Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline."
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/samply_profile": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Samply Profile",
"description": "Retrieve the last samply profile of a pipeline, regardless of whether profiling is currently in progress.\nIf ?latest parameter is specified and Samply profile collection is in progress, returns HTTP 307 with Retry-After header.",
"operationId": "get_pipeline_samply_profile",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "ordinal",
"in": "query",
"description": "In a multihost pipeline, the ordinal of the pipeline to sample.",
"required": false,
"schema": {
"type": "integer",
"minimum": 0
}
},
{
"name": "latest",
"in": "query",
"description": "If true, returns 204 redirect with Retry-After header if profile collection is in progress.\nIf false or not provided, returns the last collected profile.",
"required": false,
"schema": {
"type": "boolean"
}
}
],
"responses": {
"200": {
"description": "Samply profile as a gzip containing the profile that can be inspected by the samply tool. Note: may return 204 No Content with Retry-After header if latest=true and profiling is in progress.",
"content": {
"application/gzip": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"400": {
"description": "No samply profile exists for the pipeline, create one by calling `POST /pipelines/{pipeline_name}/samply_profile?duration_secs=30`",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
},
"post": {
"tags": [
"Metrics & Debugging"
],
"summary": "Start a Samply profile",
"description": "Profile the pipeline using the Samply profiler for the next `duration_secs` seconds.",
"operationId": "start_samply_profile",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "ordinal",
"in": "query",
"description": "In a multihost pipeline, the ordinal of the pipeline to sample.",
"required": false,
"schema": {
"type": "integer",
"minimum": 0
}
},
{
"name": "duration_secs",
"in": "query",
"description": "The number of seconds to sample for the profile.",
"required": false,
"schema": {
"type": "integer",
"format": "int64",
"minimum": 0
}
}
],
"responses": {
"202": {
"description": "Started profiling the pipeline with the Samply tool"
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"409": {
"description": "Samply profile collection is already in progress",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/start": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Start Pipeline",
"description": "Start the pipeline asynchronously by updating the desired status.\n\nThe endpoint returns immediately after setting the desired status.\nThe procedure to get to the desired status is performed asynchronously.\nProgress should be monitored by polling the pipeline `GET` endpoints.\n\nNote the following:\n- A stopped pipeline can be started through calling `/start?initial=running`,\n`/start?initial=paused`, or `/start?initial=standby`.\n- If the pipeline is already (being) started (provisioned), it will still return success\n- It is not possible to call `/start` when the pipeline has already had `/stop` called and is\nin the process of suspending or stopping.",
"operationId": "post_pipeline_start",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "initial",
"in": "query",
"description": "The `initial` parameter determines whether to after provisioning the pipeline make it\nbecome `standby`, `paused` or `running` (only valid values).",
"required": false,
"schema": {
"type": "string"
}
},
{
"name": "bootstrap_policy",
"in": "query",
"required": false,
"schema": {
"$ref": "#/components/schemas/BootstrapPolicy"
}
}
],
"responses": {
"202": {
"description": "Action is accepted and is being performed"
},
"400": {
"description": "Action could not be performed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Deployment resources status (current: 'Stopping', desired: 'Stopped') cannot have desired changed to 'Provisioned'. Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline.",
"error_code": "IllegalPipelineAction",
"details": {
"status": "Stopping",
"current_desired_status": "Stopped",
"new_desired_status": "Provisioned",
"hint": "Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline."
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/start_transaction": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Begin Transaction",
"description": "Start a new transaction.",
"operationId": "start_transaction",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Transaction successfully started.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/StartTransactionResponse"
}
}
}
},
"409": {
"description": "Another transaction is already in progress.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/stats": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Pipeline Stats",
"description": "Retrieve statistics (e.g., performance counters) of a running or paused pipeline.",
"operationId": "get_pipeline_stats",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline statistics retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ControllerStatus"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/stop": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Stop Pipeline",
"description": "Stop the pipeline asynchronously by updating the desired state.\n\nThere are two variants:\n- `/stop?force=false` (default): the pipeline will first atomically checkpoint before\ndeprovisioning the compute resources. When resuming, the pipeline will start from this\n- `/stop?force=true`: the compute resources will be immediately deprovisioned. When resuming,\nit will pick up the latest checkpoint made by the periodic checkpointer or by a prior\n`/checkpoint` call.\n\nThe endpoint returns immediately after setting the desired state to `Suspended` for\n`?force=false` or `Stopped` for `?force=true`. In the former case, once the pipeline has\nsuccessfully passes the `Suspending` state, the desired state will become `Stopped` as well.\nThe procedure to get to the desired state is performed asynchronously. Progress should be\nmonitored by polling the pipeline `GET` endpoints.\n\nNote the following:\n- The suspending that is done with `/stop?force=false` is not guaranteed to succeed:\n- If an error is returned during the suspension, the pipeline will be forcefully stopped with\nthat error set\n- Otherwise, it will keep trying to suspend, in which case it is possible to cancel suspending\nby calling `/stop?force=true`\n- `/stop?force=true` cannot be cancelled: the pipeline must first reach `Stopped` before another\naction can be done\n- A pipeline which is in the process of suspending or stopping can only be forcefully stopped",
"operationId": "post_pipeline_stop",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "force",
"in": "query",
"description": "The `force` parameter determines whether to immediately deprovision the pipeline compute\nresources (`force=true`) or first attempt to atomically checkpoint before doing so\n(`force=false`, which is the default).",
"required": false,
"schema": {
"type": "boolean"
}
}
],
"responses": {
"202": {
"description": "Action is accepted and is being performed"
},
"400": {
"description": "Action could not be performed",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Deployment resources status (current: 'Stopping', desired: 'Stopped') cannot have desired changed to 'Provisioned'. Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline.",
"error_code": "IllegalPipelineAction",
"details": {
"status": "Stopping",
"current_desired_status": "Stopped",
"new_desired_status": "Provisioned",
"hint": "Cannot restart the pipeline while it is stopping. Wait for it to stop before starting a new instance of the pipeline."
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"405": {
"description": "Action is not supported",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Unsupported action": {
"value": {
"message": "Unsupported pipeline action 'suspend': this pipeline does not support the suspend action for the following reason(s):\n - Storage must be configured",
"error_code": "UnsupportedPipelineAction",
"details": {
"action": "suspend",
"reason": "this pipeline does not support the suspend action for the following reason(s):\n - Storage must be configured"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"501": {
"description": "Action is not implemented because it is only available in the Enterprise edition",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "Action can not be performed (maybe because the pipeline is already suspended)",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/support_bundle": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Download Support Bundle",
"description": "Generate a support bundle for a pipeline.\n\nThis endpoint collects various diagnostic data from the pipeline including\ncircuit profile, heap profile, metrics, logs, stats, and connector statistics,\nand packages them into a single ZIP file for support purposes.",
"operationId": "get_pipeline_support_bundle",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "collect",
"in": "query",
"description": "Whether to collect new data from the running pipeline (default: true)\nWhen false, only previously collected data will be included in the bundle",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "circuit_profile",
"in": "query",
"description": "Whether to collect circuit profile data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "heap_profile",
"in": "query",
"description": "Whether to collect heap profile data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "metrics",
"in": "query",
"description": "Whether to collect metrics data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "logs",
"in": "query",
"description": "Whether to collect logs data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "stats",
"in": "query",
"description": "Whether to collect stats data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "pipeline_config",
"in": "query",
"description": "Whether to collect pipeline configuration data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "system_config",
"in": "query",
"description": "Whether to collect system configuration data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
},
{
"name": "dataflow_graph",
"in": "query",
"description": "Whether to collect dataflow graph data (default: true)",
"required": false,
"schema": {
"type": "boolean"
}
}
],
"responses": {
"200": {
"description": "Support bundle containing diagnostic information",
"content": {
"application/zip": {
"schema": {
"type": "string",
"format": "binary"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/completion_token": {
"get": {
"tags": [
"Input Connectors"
],
"summary": "Get Completion Token",
"description": "Generate a completion token for an input connector.\n\nReturns a token that can be passed to the `/completion_status` endpoint\nto check whether the pipeline has finished processing all inputs received from the\nconnector before the token was generated.",
"operationId": "completion_token",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "table_name",
"in": "path",
"description": "SQL table name. Unquoted SQL names have to be capitalized. Quoted SQL names have to exactly match the case from the SQL program.",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "connector_name",
"in": "path",
"description": "Unique input connector name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Completion token that can be passed to the '/completion_status' endpoint.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/CompletionTokenResponse"
}
}
}
},
"404": {
"description": "Specified pipeline, table, or connector does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/stats": {
"get": {
"tags": [
"Input Connectors"
],
"summary": "Get Input Status",
"description": "Retrieve the status of an input connector.",
"operationId": "get_pipeline_input_connector_status",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "table_name",
"in": "path",
"description": "Unique table name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "connector_name",
"in": "path",
"description": "Unique input connector name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Input connector status retrieved successfully",
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
}
},
"404": {
"description": "Pipeline, table and/or input connector with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Pipeline with that name does not exist": {
"value": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/tables/{table_name}/connectors/{connector_name}/{action}": {
"post": {
"tags": [
"Input Connectors"
],
"summary": "Control Input Connector",
"description": "Start (resume) or pause the input connector.\n\nThe following values of the `action` argument are accepted: `start` and `pause`.\n\nInput connectors can be in either the `Running` or `Paused` state. By default,\nconnectors are initialized in the `Running` state when a pipeline is deployed.\nIn this state, the connector actively fetches data from its configured data\nsource and forwards it to the pipeline. If needed, a connector can be created\nin the `Paused` state by setting its\n[`paused`](https://docs.feldera.com/connectors/#generic-attributes) property\nto `true`. When paused, the connector remains idle until reactivated using the\n`start` command. Conversely, a connector in the `Running` state can be paused\nat any time by issuing the `pause` command.\n\nThe current connector state can be retrieved via the\n`GET /v0/pipelines/{pipeline_name}/stats` endpoint.\n\nNote that only if both the pipeline *and* the connector state is `Running`,\nis the input connector active.\n```text\nPipeline state Connector state Connector is active?\n-------------- --------------- --------------------\nPaused Paused No\nPaused Running No\nRunning Paused No\nRunning Running Yes\n```",
"operationId": "post_pipeline_input_connector_action",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "table_name",
"in": "path",
"description": "SQL table name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "connector_name",
"in": "path",
"description": "Input connector name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "action",
"in": "path",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Action has been processed"
},
"404": {
"description": "Pipeline, table and/or input connector with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Pipeline with that name does not exist": {
"value": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/testing": {
"post": {
"tags": [
"Metrics & Debugging"
],
"summary": "Test Endpoint",
"description": "This endpoint is used as part of the test harness. Only available if the `testing`\nunstable feature is enabled. Do not use in production.",
"operationId": "post_pipeline_testing",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "set_platform_version",
"in": "query",
"required": false,
"schema": {
"type": "string",
"nullable": true
}
}
],
"responses": {
"200": {
"description": "Request successfully processed"
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"405": {
"description": "Endpoint is disabled. Set FELDERA_UNSTABLE_FEATURES=\"testing\" to enable.",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/time_series": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Get Time Series Stats",
"description": "Retrieve time series for statistics of a running or paused pipeline.",
"operationId": "get_pipeline_time_series",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline time series retrieved successfully",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/TimeSeries"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/time_series_stream": {
"get": {
"tags": [
"Metrics & Debugging"
],
"summary": "Stream Time Series",
"description": "Stream time series for statistics of a running or paused pipeline.\n\nReturns a snapshot of all existing time series data followed by a continuous stream of\nnew time series data points as they become available. The response is in newline-delimited\nJSON format (NDJSON) where each line is a JSON object representing a single time series\ndata point.",
"operationId": "get_pipeline_time_series_stream",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline time series stream established successfully",
"content": {
"application/x-ndjson": {
"schema": {
"type": "string"
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/update_runtime": {
"post": {
"tags": [
"Pipeline Lifecycle"
],
"summary": "Recompile Pipeline",
"description": "Recompile a pipeline with the Feldera runtime version included in the\ncurrently installed Feldera platform.\n\nUse this endpoint after upgrading Feldera to rebuild pipelines that were\ncompiled with older platform versions. In most cases, recompilation is not\nrequired; pipelines compiled with older versions will continue to run on the\nupgraded platform.\n\nSituations where recompilation may be necessary:\n- To benefit from the latest bug fixes and performance optimizations.\n- When backward-incompatible changes are introduced in Feldera. In this case,\nattempting to start a pipeline compiled with an unsupported version will\nresult in an error.\n\nIf the pipeline is already compiled with the current platform version,\nthis operation is a no-op.\n\nNote that recompiling the pipeline with a new platform version may change its\nquery plan. If the modified pipeline is started from an existing checkpoint,\nit may require bootstrapping parts of its state from scratch. See Feldera\ndocumentation for details on the bootstrapping process.",
"operationId": "post_update_runtime",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Pipeline successfully updated",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/PipelineInfo"
},
"example": {
"id": "67e55044-10b1-426f-9247-bb680e5fe0c8",
"name": "example1",
"description": "Description of the pipeline example1",
"created_at": "1970-01-01T00:00:00Z",
"version": 4,
"platform_version": "v0",
"runtime_config": {
"workers": 16,
"hosts": 1,
"storage": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"fault_tolerance": {
"model": "none",
"checkpoint_interval_secs": 60
},
"cpu_profiler": true,
"tracing": false,
"tracing_endpoint_jaeger": "",
"min_batch_size_records": 0,
"max_buffering_delay_usecs": 0,
"resources": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
},
"clock_resolution_usecs": 1000000,
"pin_cpus": [],
"provisioning_timeout_secs": null,
"max_parallel_connector_init": null,
"init_containers": null,
"checkpoint_during_suspend": true,
"http_workers": null,
"io_workers": null,
"dev_tweaks": {},
"logging": null,
"pipeline_template_configmap": null
},
"program_code": "CREATE TABLE table1 ( col1 INT );",
"udf_rust": "",
"udf_toml": "",
"program_config": {
"profile": "optimized",
"cache": true,
"runtime_version": null
},
"program_version": 2,
"program_status": "Pending",
"program_status_since": "1970-01-01T00:00:00Z",
"program_error": {
"sql_compilation": null,
"rust_compilation": null,
"system_error": null
},
"program_info": null,
"deployment_error": null,
"refresh_version": 4,
"storage_status": "Cleared",
"deployment_id": null,
"deployment_initial": null,
"deployment_status": "Stopped",
"deployment_status_since": "1970-01-01T00:00:00Z",
"deployment_desired_status": "Stopped",
"deployment_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_status": "Stopped",
"deployment_resources_status_details": null,
"deployment_resources_status_since": "1970-01-01T00:00:00Z",
"deployment_resources_desired_status": "Stopped",
"deployment_resources_desired_status_since": "1970-01-01T00:00:00Z",
"deployment_runtime_status": null,
"deployment_runtime_status_details": null,
"deployment_runtime_status_since": null,
"deployment_runtime_desired_status": null,
"deployment_runtime_desired_status_since": null
}
}
}
},
"400": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Cannot update non-stopped pipeline": {
"value": {
"message": "Pipeline can only be updated while stopped. Stop it first by invoking '/stop'.",
"error_code": "UpdateRestrictedToStopped",
"details": null
}
},
"Name does not match pattern": {
"value": {
"message": "Name 'name-with-invalid-char-#' contains characters which are not lowercase (a-z), uppercase (A-Z), numbers (0-9), underscores (_) or hyphens (-)",
"error_code": "NameDoesNotMatchPattern",
"details": {
"name": "name-with-invalid-char-#"
}
}
}
}
}
}
},
"404": {
"description": "Pipeline with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"example": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
},
"/v0/pipelines/{pipeline_name}/views/{view_name}/connectors/{connector_name}/stats": {
"get": {
"tags": [
"Output Connectors"
],
"summary": "Get Output Status",
"description": "Retrieve the status of an output connector.",
"operationId": "get_pipeline_output_connector_status",
"parameters": [
{
"name": "pipeline_name",
"in": "path",
"description": "Unique pipeline name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "view_name",
"in": "path",
"description": "SQL view name",
"required": true,
"schema": {
"type": "string"
}
},
{
"name": "connector_name",
"in": "path",
"description": "Output connector name",
"required": true,
"schema": {
"type": "string"
}
}
],
"responses": {
"200": {
"description": "Output connector status retrieved successfully",
"content": {
"application/json": {
"schema": {
"type": "object"
}
}
}
},
"404": {
"description": "Pipeline, view and/or output connector with that name does not exist",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Pipeline with that name does not exist": {
"value": {
"message": "Unknown pipeline name 'non-existent-pipeline'",
"error_code": "UnknownPipelineName",
"details": {
"pipeline_name": "non-existent-pipeline"
}
}
}
}
}
}
},
"500": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
}
}
}
},
"503": {
"description": "",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/ErrorResponse"
},
"examples": {
"Disconnected during response": {
"value": {
"message": "Error sending HTTP request to pipeline: the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs. Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "the pipeline disconnected while it was processing this HTTP request. This could be because the pipeline either (a) encountered a fatal error or panic, (b) was stopped, or (c) experienced network issues -- retrying might help in the last case. Alternatively, check the pipeline logs."
}
}
},
"Pipeline is currently unavailable": {
"value": {
"message": "Error sending HTTP request to pipeline: deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "deployment status is currently 'unavailable' -- wait for it to become 'running' or 'paused' again"
}
}
},
"Pipeline is not deployed": {
"value": {
"message": "Unable to interact with pipeline because the deployment status (stopped) indicates it is not (yet) fully provisioned pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionNotDeployed",
"details": {
"pipeline_name": "my_pipeline",
"status": "Stopped",
"desired_status": "Provisioned"
}
}
},
"Response timeout": {
"value": {
"message": "Error sending HTTP request to pipeline: timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response) Failed request: /pause pipeline-id=N/A pipeline-name=\"my_pipeline\"",
"error_code": "PipelineInteractionUnreachable",
"details": {
"pipeline_name": "my_pipeline",
"request": "/pause",
"error": "timeout (10s) was reached: this means the pipeline took too long to respond -- this can simply be because the request was too difficult to process in time, or other reasons (e.g., deadlock): the pipeline logs might contain additional information (original send request error: Timeout while waiting for response)"
}
}
}
}
}
}
}
},
"security": [
{
"JSON web token (JWT) or API key": []
}
]
}
}
},
"components": {
"schemas": {
"AdHocInputConfig": {
"type": "object",
"description": "Configuration for inserting data with ad-hoc queries\n\nAn ad-hoc input adapters cannot be usefully configured as part of pipeline\nconfiguration. Instead, use ad-hoc queries through the UI, the REST API, or\nthe `fda` command-line tool.",
"required": [
"name"
],
"properties": {
"name": {
"type": "string",
"description": "Autogenerated name."
}
}
},
"AdHocResultFormat": {
"type": "string",
"description": "URL-encoded `format` argument to the `/query` endpoint.",
"enum": [
"text",
"json",
"parquet",
"arrow_ipc",
"hash"
]
},
"AdhocQueryArgs": {
"type": "object",
"description": "Arguments to the `/query` endpoint.\n\nThe arguments can be provided in two ways:\n\n- In case a normal HTTP connection is established to the endpoint,\nthese arguments are passed as URL-encoded parameters.\nNote: this mode is deprecated and will be removed in the future.\n\n- If a Websocket connection is opened to `/query`, the arguments are passed\nto the server over the websocket as a JSON encoded string.",
"properties": {
"format": {
"$ref": "#/components/schemas/AdHocResultFormat"
},
"sql": {
"type": "string",
"description": "The SQL query to run."
}
}
},
"ApiKeyDescr": {
"type": "object",
"description": "API key descriptor.",
"required": [
"id",
"name",
"scopes"
],
"properties": {
"id": {
"$ref": "#/components/schemas/ApiKeyId"
},
"name": {
"type": "string"
},
"scopes": {
"type": "array",
"items": {
"$ref": "#/components/schemas/ApiPermission"
}
}
}
},
"ApiKeyId": {
"type": "string",
"format": "uuid",
"description": "API key identifier."
},
"ApiPermission": {
"type": "string",
"description": "Permission types for invoking API endpoints.",
"enum": [
"Read",
"Write"
]
},
"Auth": {
"type": "object",
"properties": {
"credentials": {
"allOf": [
{
"$ref": "#/components/schemas/Credentials"
}
],
"nullable": true
},
"jwt": {
"type": "string",
"nullable": true
},
"nkey": {
"type": "string",
"nullable": true
},
"token": {
"type": "string",
"nullable": true
},
"user_and_password": {
"allOf": [
{
"$ref": "#/components/schemas/UserAndPassword"
}
],
"nullable": true
}
}
},
"AuthProvider": {
"oneOf": [
{
"type": "object",
"required": [
"AwsCognito"
],
"properties": {
"AwsCognito": {
"$ref": "#/components/schemas/ProviderAwsCognito"
}
}
},
{
"type": "object",
"required": [
"GenericOidc"
],
"properties": {
"GenericOidc": {
"$ref": "#/components/schemas/ProviderGenericOidc"
}
}
}
]
},
"BootstrapPolicy": {
"type": "string",
"enum": [
"allow",
"reject",
"await_approval"
]
},
"BuildInformation": {
"type": "object",
"description": "Information about the build of the platform.",
"required": [
"build_timestamp",
"build_cpu",
"build_os",
"cargo_dependencies",
"cargo_features",
"cargo_debug",
"cargo_opt_level",
"cargo_target_triple",
"rustc_version"
],
"properties": {
"build_cpu": {
"type": "string",
"description": "CPU of build machine."
},
"build_os": {
"type": "string",
"description": "OS of build machine."
},
"build_timestamp": {
"type": "string",
"description": "Timestamp of the build."
},
"cargo_debug": {
"type": "string",
"description": "Whether the build is optimized for performance."
},
"cargo_dependencies": {
"type": "string",
"description": "Dependencies used during the build."
},
"cargo_features": {
"type": "string",
"description": "Features enabled during the build."
},
"cargo_opt_level": {
"type": "string",
"description": "Optimization level of the build."
},
"cargo_target_triple": {
"type": "string",
"description": "Target triple of the build."
},
"rustc_version": {
"type": "string",
"description": "Rust version of the build used."
}
}
},
"CalciteId": {
"oneOf": [
{
"type": "object",
"required": [
"partial"
],
"properties": {
"partial": {
"type": "integer",
"minimum": 0
}
}
},
{
"type": "object",
"required": [
"final"
],
"properties": {
"final": {
"type": "integer",
"minimum": 0
}
}
},
{
"type": "object",
"required": [
"and"
],
"properties": {
"and": {
"type": "array",
"items": {
"$ref": "#/components/schemas/CalciteId"
}
}
}
},
{
"type": "object",
"required": [
"seq"
],
"properties": {
"seq": {
"type": "array",
"items": {
"$ref": "#/components/schemas/CalciteId"
}
}
}
},
{
"type": "object",
"default": null,
"nullable": true
}
]
},
"CalcitePlan": {
"type": "object",
"description": "The Calcite plan representation of a dataflow graph.",
"required": [
"rels"
],
"properties": {
"rels": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Rel"
}
}
}
},
"CheckpointFailure": {
"type": "object",
"description": "Information about a failed checkpoint.",
"required": [
"sequence_number",
"error"
],
"properties": {
"error": {
"type": "string",
"description": "Error message associated with the failure."
},
"sequence_number": {
"type": "integer",
"format": "int64",
"description": "Sequence number of the failed checkpoint.",
"minimum": 0
}
}
},
"CheckpointMetadata": {
"type": "object",
"description": "Holds meta-data about a checkpoint that was taken for persistent storage\nand recovery of a circuit's state.",
"required": [
"uuid",
"fingerprint"
],
"properties": {
"fingerprint": {
"type": "integer",
"format": "int64",
"description": "Fingerprint of the circuit at the time of the checkpoint.",
"minimum": 0
},
"identifier": {
"type": "string",
"description": "An optional name for the checkpoint.",
"nullable": true
},
"processed_records": {
"type": "integer",
"format": "int64",
"description": "Total number of records processed.",
"nullable": true,
"minimum": 0
},
"size": {
"type": "integer",
"format": "int64",
"description": "Total size of the checkpoint files in bytes.",
"nullable": true,
"minimum": 0
},
"steps": {
"type": "integer",
"format": "int64",
"description": "Total number of steps made.",
"nullable": true,
"minimum": 0
},
"uuid": {
"type": "string",
"format": "uuid",
"description": "A unique identifier for the given checkpoint.\n\nThis is used to identify the checkpoint in the file-system hierarchy."
}
}
},
"CheckpointResponse": {
"type": "object",
"description": "Response to a checkpoint request.",
"required": [
"checkpoint_sequence_number"
],
"properties": {
"checkpoint_sequence_number": {
"type": "integer",
"format": "int64",
"minimum": 0
}
}
},
"CheckpointStatus": {
"type": "object",
"description": "Checkpoint status returned by the `/checkpoint_status` endpoint.",
"properties": {
"failure": {
"allOf": [
{
"$ref": "#/components/schemas/CheckpointFailure"
}
],
"nullable": true
},
"success": {
"type": "integer",
"format": "int64",
"description": "Most recently successful checkpoint.",
"nullable": true,
"minimum": 0
}
}
},
"Chunk": {
"type": "object",
"description": "A set of updates to a SQL table or view.\n\nThe `sequence_number` field stores the offset of the chunk relative to the\nstart of the stream and can be used to implement reliable delivery.\nThe payload is stored in the `bin_data`, `text_data`, or `json_data` field\ndepending on the data format used.",
"required": [
"sequence_number"
],
"properties": {
"bin_data": {
"type": "string",
"format": "binary",
"description": "Base64 encoded binary payload, e.g., bincode.",
"nullable": true
},
"json_data": {
"type": "object",
"description": "JSON payload.",
"nullable": true
},
"sequence_number": {
"type": "integer",
"format": "int64",
"minimum": 0
},
"text_data": {
"type": "string",
"description": "Text payload, e.g., CSV.",
"nullable": true
}
}
},
"ClockConfig": {
"type": "object",
"required": [
"clock_resolution_usecs"
],
"properties": {
"clock_resolution_usecs": {
"type": "integer",
"format": "int64",
"minimum": 0
}
}
},
"ClusterMonitorEvent": {
"type": "object",
"description": "Brief cluster monitoring event with only the identifier, timestamp and health conclusions.",
"required": [
"id",
"recorded_at",
"api_status",
"compiler_status",
"runner_status"
],
"properties": {
"api_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"compiler_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"id": {
"$ref": "#/components/schemas/ClusterMonitorEventId"
},
"recorded_at": {
"type": "string",
"format": "date-time",
"description": "Timestamp at which the event was recorded in the database. Because collecting the data for\nthe health checks can take time, this timestamp is approximate."
},
"runner_status": {
"$ref": "#/components/schemas/MonitorStatus"
}
}
},
"ClusterMonitorEventFieldSelector": {
"type": "string",
"enum": [
"all",
"status"
]
},
"ClusterMonitorEventId": {
"type": "string",
"format": "uuid",
"description": "Cluster monitor event identifier."
},
"ClusterMonitorEventSelectedInfo": {
"type": "object",
"description": "Cluster monitor event information which has a selected subset of optional fields.\nIf an optional field is not selected (i.e., is `None`), it will not be serialized.",
"required": [
"id",
"recorded_at",
"all_healthy",
"api_status",
"compiler_status",
"runner_status"
],
"properties": {
"all_healthy": {
"type": "boolean"
},
"api_resources_info": {
"type": "string",
"nullable": true
},
"api_self_info": {
"type": "string",
"nullable": true
},
"api_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"compiler_resources_info": {
"type": "string",
"nullable": true
},
"compiler_self_info": {
"type": "string",
"nullable": true
},
"compiler_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"id": {
"$ref": "#/components/schemas/ClusterMonitorEventId"
},
"recorded_at": {
"type": "string",
"format": "date-time"
},
"runner_resources_info": {
"type": "string",
"nullable": true
},
"runner_self_info": {
"type": "string",
"nullable": true
},
"runner_status": {
"$ref": "#/components/schemas/MonitorStatus"
}
}
},
"ColumnType": {
"type": "object",
"description": "A SQL column type description.\n\nMatches the Calcite JSON format.",
"required": [
"nullable"
],
"properties": {
"component": {
"allOf": [
{
"$ref": "#/components/schemas/ColumnType"
}
],
"nullable": true
},
"fields": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Field"
},
"description": "The fields of the type (if available).\n\nFor example this would specify the fields of a `CREATE TYPE` construct.\n\n```sql\nCREATE TYPE person_typ AS (\nfirstname VARCHAR(30),\nlastname VARCHAR(30),\naddress ADDRESS_TYP\n);\n```\n\nWould lead to the following `fields` value:\n\n```sql\n[\nColumnType { name: \"firstname, ... },\nColumnType { name: \"lastname\", ... },\nColumnType { name: \"address\", fields: [ ... ] }\n]\n```",
"nullable": true
},
"key": {
"allOf": [
{
"$ref": "#/components/schemas/ColumnType"
}
],
"nullable": true
},
"nullable": {
"type": "boolean",
"description": "Does the type accept NULL values?"
},
"precision": {
"type": "integer",
"format": "int64",
"description": "Precision of the type.\n\n# Examples\n- `VARCHAR` sets precision to `-1`.\n- `VARCHAR(255)` sets precision to `255`.\n- `BIGINT`, `DATE`, `FLOAT`, `DOUBLE`, `GEOMETRY`, etc. sets precision\nto None\n- `TIME`, `TIMESTAMP` set precision to `0`.",
"nullable": true
},
"scale": {
"type": "integer",
"format": "int64",
"description": "The scale of the type.\n\n# Example\n- `DECIMAL(1,2)` sets scale to `2`.",
"nullable": true
},
"type": {
"$ref": "#/components/schemas/SqlType"
},
"value": {
"allOf": [
{
"$ref": "#/components/schemas/ColumnType"
}
],
"nullable": true
}
}
},
"CombinedDesiredStatus": {
"type": "string",
"enum": [
"Stopped",
"Unavailable",
"Standby",
"Paused",
"Running",
"Suspended"
]
},
"CombinedStatus": {
"type": "string",
"enum": [
"Stopped",
"Provisioning",
"Unavailable",
"Coordination",
"Standby",
"AwaitingApproval",
"Initializing",
"Bootstrapping",
"Replaying",
"Paused",
"Running",
"Suspended",
"Stopping"
]
},
"CompilationProfile": {
"type": "string",
"description": "Enumeration of possible compilation profiles that can be passed to the Rust compiler\nas an argument via `cargo build --profile <>`. A compilation profile affects among\nother things the compilation speed (how long till the program is ready to be run)\nand runtime speed (the performance while running).",
"enum": [
"dev",
"unoptimized",
"optimized",
"optimized_symbols"
]
},
"CompletedWatermark": {
"type": "object",
"description": "A watermark that has been fully processed by the pipeline.",
"required": [
"metadata",
"ingested_at",
"processed_at",
"completed_at"
],
"properties": {
"completed_at": {
"type": "string",
"description": "Timestamp when all outputs produced from this input have been pushed to all output endpoints."
},
"ingested_at": {
"type": "string",
"description": "Timestamp when the data was ingested from the wire."
},
"metadata": {
"type": "object",
"description": "Metadata that describes the position in the input stream (e.g., Kafka partition/offset pairs)."
},
"processed_at": {
"type": "string",
"description": "Timestamp when the data was processed by the circuit."
}
}
},
"CompletionStatus": {
"type": "string",
"description": "Completion token status returned by the `/completion_status` endpoint.",
"enum": [
"complete",
"inprogress"
]
},
"CompletionStatusArgs": {
"type": "object",
"description": "URL-encoded arguments to the `/completion_status` endpoint.",
"required": [
"token"
],
"properties": {
"token": {
"type": "string",
"description": "Completion token returned by the `/completion_token` or `/ingress`\nendpoint."
}
}
},
"CompletionStatusResponse": {
"type": "object",
"description": "Response to a completion token status request.",
"required": [
"status"
],
"properties": {
"status": {
"$ref": "#/components/schemas/CompletionStatus"
}
}
},
"CompletionTokenResponse": {
"type": "object",
"description": "Response to a completion token creation request.",
"required": [
"token"
],
"properties": {
"token": {
"type": "string",
"description": "Completion token.\n\nAn opaque string associated with the current position in the input stream\ngenerated by an input connector.\nPass this string to the `/completion_status` endpoint to check whether all\ninputs associated with the token have been fully processed by the pipeline."
}
}
},
"Condition": {
"type": "object",
"properties": {
"literal": {
"type": "boolean"
},
"op": {
"allOf": [
{
"$ref": "#/components/schemas/Op"
}
],
"nullable": true
},
"operands": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Operand"
},
"nullable": true
}
},
"additionalProperties": {}
},
"Configuration": {
"type": "object",
"required": [
"telemetry",
"edition",
"version",
"revision",
"runtime_revision",
"changelog_url",
"build_info",
"build_source"
],
"properties": {
"build_info": {
"$ref": "#/components/schemas/BuildInformation"
},
"build_source": {
"type": "string",
"description": "Build source: \"ci\" for GitHub Actions builds, \"source\" for local builds"
},
"changelog_url": {
"type": "string",
"description": "URL that navigates to the changelog of the current version"
},
"edition": {
"type": "string",
"description": "Feldera edition: \"Open source\" or \"Enterprise\""
},
"license_validity": {
"allOf": [
{
"$ref": "#/components/schemas/LicenseValidity"
}
],
"nullable": true
},
"revision": {
"type": "string",
"description": "Specific revision corresponding to the edition `version` (e.g., git commit hash)."
},
"runtime_revision": {
"type": "string",
"description": "Specific revision corresponding to the default runtime version of the platform (e.g., git commit hash)."
},
"telemetry": {
"type": "string",
"description": "Telemetry key."
},
"unstable_features": {
"type": "string",
"description": "List of unstable features that are enabled.",
"nullable": true
},
"update_info": {
"allOf": [
{
"$ref": "#/components/schemas/UpdateInformation"
}
],
"nullable": true
},
"version": {
"type": "string",
"description": "The version corresponding to the type of `edition`.\nFormat is `x.y.z`."
}
}
},
"ConnectOptions": {
"type": "object",
"description": "Options for connecting to a NATS server.",
"required": [
"server_url"
],
"properties": {
"auth": {
"$ref": "#/components/schemas/Auth"
},
"connection_timeout_secs": {
"type": "integer",
"format": "int64",
"description": "Connection timeout\n\nHow long to wait when establishing the initial connection to the\nNATS server.",
"minimum": 0
},
"request_timeout_secs": {
"type": "integer",
"format": "int64",
"description": "Request timeout in seconds.\n\nHow long to wait for responses to requests.",
"minimum": 0
},
"server_url": {
"type": "string",
"description": "NATS server URL (e.g., \"nats://localhost:4222\")."
}
}
},
"ConnectorConfig": {
"allOf": [
{
"$ref": "#/components/schemas/OutputBufferConfig"
},
{
"type": "object",
"required": [
"transport"
],
"properties": {
"format": {
"allOf": [
{
"$ref": "#/components/schemas/FormatConfig"
}
],
"nullable": true
},
"index": {
"type": "string",
"description": "Name of the index that the connector is attached to.\n\nThis property is valid for output connectors only. It is used with data\ntransports and formats that expect output updates in the form of key/value\npairs, where the key typically represents a unique id associated with the\ntable or view.\n\nTo support such output formats, an output connector can be attached to an\nindex created using the SQL CREATE INDEX statement. An index of a table\nor view contains the same updates as the table or view itself, indexed by\none or more key columns.\n\nSee individual connector documentation for details on how they work\nwith indexes.",
"nullable": true
},
"labels": {
"type": "array",
"items": {
"type": "string"
},
"description": "Arbitrary user-defined text labels associated with the connector.\n\nThese labels can be used in conjunction with the `start_after` property\nto control the start order of connectors."
},
"max_batch_size": {
"type": "integer",
"format": "int64",
"description": "Maximum batch size, in records.\n\nThis is the maximum number of records to process in one batch through\nthe circuit. The time and space cost of processing a batch is\nasymptotically superlinear in the size of the batch, but very small\nbatches are less efficient due to constant factors.\n\nThis should usually be less than `max_queued_records`, to give the\nconnector a round-trip time to restart and refill the buffer while\nbatches are being processed.\n\nSome input adapters might not honor this setting.\n\nThe default is 10,000.",
"minimum": 0
},
"max_queued_records": {
"type": "integer",
"format": "int64",
"description": "Backpressure threshold.\n\nMaximal number of records queued by the endpoint before the endpoint\nis paused by the backpressure mechanism.\n\nFor input endpoints, this setting bounds the number of records that have\nbeen received from the input transport but haven't yet been consumed by\nthe circuit since the circuit, since the circuit is still busy processing\nprevious inputs.\n\nFor output endpoints, this setting bounds the number of records that have\nbeen produced by the circuit but not yet sent via the output transport endpoint\nnor stored in the output buffer (see `enable_output_buffer`).\n\nNote that this is not a hard bound: there can be a small delay between\nthe backpressure mechanism is triggered and the endpoint is paused, during\nwhich more data may be queued.\n\nThe default is 1 million.",
"minimum": 0
},
"paused": {
"type": "boolean",
"description": "Create connector in paused state.\n\nThe default is `false`."
},
"start_after": {
"type": "array",
"items": {
"type": "string"
},
"description": "Start the connector after all connectors with specified labels.\n\nThis property is used to control the start order of connectors.\nThe connector will not start until all connectors with the specified\nlabels have finished processing all inputs.",
"nullable": true
},
"transport": {
"$ref": "#/components/schemas/TransportConfig"
}
}
}
],
"description": "A data connector's configuration"
},
"ConnectorStats": {
"type": "object",
"description": "Aggregated connector error statistics.\n\nThis structure contains the sum of all error counts across all input and output connectors\nfor a pipeline.",
"required": [
"num_errors"
],
"properties": {
"num_errors": {
"type": "integer",
"format": "int64",
"description": "Total number of errors across all connectors.\n\nThis is the sum of:\n- `num_transport_errors` from all input connectors\n- `num_parse_errors` from all input connectors\n- `num_encode_errors` from all output connectors\n- `num_transport_errors` from all output connectors",
"minimum": 0
}
}
},
"ConnectorTransactionPhase": {
"type": "object",
"description": "Connector transaction phase with debugging label.",
"required": [
"phase"
],
"properties": {
"label": {
"type": "string",
"description": "Optional label for debugging.",
"nullable": true
},
"phase": {
"$ref": "#/components/schemas/TransactionPhase"
}
}
},
"ConsumerConfig": {
"type": "object",
"required": [
"deliver_policy"
],
"properties": {
"deliver_policy": {
"$ref": "#/components/schemas/DeliverPolicy"
},
"description": {
"type": "string",
"nullable": true
},
"filter_subjects": {
"type": "array",
"items": {
"type": "string"
}
},
"max_batch": {
"type": "integer",
"format": "int64",
"nullable": true
},
"max_bytes": {
"type": "integer",
"format": "int64",
"nullable": true
},
"max_expires": {
"type": "string",
"nullable": true
},
"max_waiting": {
"type": "integer",
"format": "int64"
},
"metadata": {
"type": "object",
"additionalProperties": {
"type": "string"
}
},
"name": {
"type": "string",
"nullable": true
},
"rate_limit": {
"type": "integer",
"format": "int64",
"minimum": 0
},
"replay_policy": {
"$ref": "#/components/schemas/ReplayPolicy"
}
}
},
"ControllerStatus": {
"type": "object",
"description": "Complete pipeline statistics returned by the `/stats` endpoint.\n\nThis schema definition matches the serialized JSON structure from\n`adapters::controller::ControllerStatus`. The actual implementation with\natomics and mutexes lives in the adapters crate, which uses ExternalControllerStatus to\nregister this OpenAPI schema, making it available to pipeline-manager\nwithout requiring a direct dependency on the adapters crate.",
"required": [
"global_metrics",
"inputs",
"outputs"
],
"properties": {
"global_metrics": {
"$ref": "#/components/schemas/GlobalControllerMetrics"
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/InputEndpointStatus"
},
"description": "Input endpoint configs and metrics."
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/OutputEndpointStatus"
},
"description": "Output endpoint configs and metrics."
},
"suspend_error": {
"allOf": [
{
"$ref": "#/components/schemas/SuspendError"
}
],
"nullable": true
}
}
},
"Credentials": {
"oneOf": [
{
"type": "object",
"required": [
"FromString"
],
"properties": {
"FromString": {
"type": "string"
}
}
},
{
"type": "object",
"required": [
"FromFile"
],
"properties": {
"FromFile": {
"type": "string"
}
},
"example": "/path/to/credentials.json"
}
]
},
"Dataflow": {
"type": "object",
"description": "The JSON representation of a dataflow graph.",
"required": [
"calcite_plan",
"mir"
],
"properties": {
"calcite_plan": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/CalcitePlan"
}
},
"mir": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/MirNode"
}
}
}
},
"DatagenInputConfig": {
"type": "object",
"description": "Configuration for generating random data for a table.",
"properties": {
"plan": {
"type": "array",
"items": {
"$ref": "#/components/schemas/GenerationPlan"
},
"description": "The sequence of generations to perform.\n\nIf not set, the generator will produce a single sequence with default settings.\nIf set, the generator will produce the specified sequences in sequential order.\n\nNote that if one of the sequences before the last one generates an unlimited number of rows\nthe following sequences will not be executed.",
"default": [
{
"rate": null,
"limit": null,
"worker_chunk_size": null,
"fields": {}
}
]
},
"seed": {
"type": "integer",
"format": "int64",
"description": "Optional seed for the random generator.\n\nSetting this to a fixed value will make the generator produce the same sequence of records\nevery time the pipeline is run.\n\n# Notes\n- To ensure the set of generated input records is deterministic across multiple runs,\napart from setting a seed, `workers` also needs to remain unchanged.\n- The input will arrive in non-deterministic order if `workers > 1`.",
"default": null,
"nullable": true,
"minimum": 0
},
"workers": {
"type": "integer",
"description": "Number of workers to use for generating data.",
"default": 1,
"minimum": 0
}
},
"additionalProperties": false
},
"DatagenStrategy": {
"type": "string",
"description": "Strategy used to generate values.",
"enum": [
"increment",
"uniform",
"zipf",
"word",
"words",
"sentence",
"sentences",
"paragraph",
"paragraphs",
"first_name",
"last_name",
"title",
"suffix",
"name",
"name_with_title",
"domain_suffix",
"email",
"username",
"password",
"field",
"position",
"seniority",
"job_title",
"ipv4",
"ipv6",
"ip",
"mac_address",
"user_agent",
"rfc_status_code",
"valid_status_code",
"company_suffix",
"company_name",
"buzzword",
"buzzword_middle",
"buzzword_tail",
"catch_phrase",
"bs_verb",
"bs_adj",
"bs_noun",
"bs",
"profession",
"industry",
"currency_code",
"currency_name",
"currency_symbol",
"credit_card_number",
"city_prefix",
"city_suffix",
"city_name",
"country_name",
"country_code",
"street_suffix",
"street_name",
"time_zone",
"state_name",
"state_abbr",
"secondary_address_type",
"secondary_address",
"zip_code",
"post_code",
"building_number",
"latitude",
"longitude",
"isbn",
"isbn13",
"isbn10",
"phone_number",
"cell_number",
"file_path",
"file_name",
"file_extension",
"dir_path"
]
},
"DeliverPolicy": {
"oneOf": [
{
"type": "string",
"enum": [
"All"
]
},
{
"type": "string",
"enum": [
"Last"
]
},
{
"type": "string",
"enum": [
"New"
]
},
{
"type": "object",
"required": [
"ByStartSequence"
],
"properties": {
"ByStartSequence": {
"type": "object",
"required": [
"start_sequence"
],
"properties": {
"start_sequence": {
"type": "integer",
"format": "int64",
"minimum": 0
}
}
}
}
},
{
"type": "object",
"required": [
"ByStartTime"
],
"properties": {
"ByStartTime": {
"type": "object",
"required": [
"start_time"
],
"properties": {
"start_time": {
"type": "string",
"format": "date-time",
"example": "2023-01-15T09:30:00Z"
}
}
}
}
},
{
"type": "string",
"enum": [
"LastPerSubject"
]
}
]
},
"DeltaTableIngestMode": {
"type": "string",
"description": "Delta table read mode.\n\nThree options are available:\n\n* `snapshot` - read a snapshot of the table and stop.\n\n* `follow` - continuously ingest changes to the table, starting from a specified version\nor timestamp.\n\n* `snapshot_and_follow` - read a snapshot of the table before switching to continuous ingestion\nmode.",
"enum": [
"snapshot",
"follow",
"snapshot_and_follow",
"cdc"
]
},
"DeltaTableReaderConfig": {
"type": "object",
"description": "Delta table input connector configuration.",
"required": [
"uri",
"mode"
],
"properties": {
"cdc_delete_filter": {
"type": "string",
"description": "A predicate that determines whether the record represents a deletion.\n\nThis setting is only valid in the `cdc` mode. It specifies a predicate applied to\neach row in the Delta table to determine whether the row represents a deletion event.\nIts value must be a valid Boolean SQL expression that can be used in a query of the\nform `SELECT * from <table> WHERE <cdc_delete_filter>`.",
"nullable": true
},
"cdc_order_by": {
"type": "string",
"description": "An expression that determines the ordering of updates in the Delta table.\n\nThis setting is only valid in the `cdc` mode. It specifies a predicate applied to\neach row in the Delta table to determine the order in which updates in the table should\nbe applied. Its value must be a valid SQL expression that can be used in a query of the\nform `SELECT * from <table> ORDER BY <cdc_order_by>`.",
"nullable": true
},
"datetime": {
"type": "string",
"description": "Optional timestamp for the snapshot in the ISO-8601/RFC-3339 format, e.g.,\n\"2024-12-09T16:09:53+00:00\".\n\nWhen this option is set, the connector finds and opens the version of the table as of the\nspecified point in time (based on the server time recorded in the transaction log, not the\nevent time encoded in the data). In `snapshot` and `snapshot_and_follow` modes, it\nretrieves the snapshot of this version of the table. In `follow`, `snapshot_and_follow`, and\n`cdc` modes, it follows transaction log records **after** this version.\n\nNote: at most one of `version` and `datetime` options can be specified.\nWhen neither of the two options is specified, the latest committed version of the table\nis used.",
"nullable": true
},
"end_version": {
"type": "integer",
"format": "int64",
"description": "Optional final table version.\n\nValid only when the connector is configured in `follow`, `snapshot_and_follow`, or `cdc` mode.\n\nWhen set, the connector will stop scanning the table’s transaction log after reaching this version or any greater version.\nThis bound is inclusive: if the specified version appears in the log, it will be processed before signaling end-of-input.",
"nullable": true
},
"filter": {
"type": "string",
"description": "Optional row filter.\n\nWhen specified, only rows that satisfy the filter condition are read from the delta table.\nThe condition must be a valid SQL Boolean expression that can be used in\nthe `where` clause of the `select * from my_table where ...` query.",
"nullable": true
},
"max_concurrent_readers": {
"type": "integer",
"format": "int32",
"description": "Maximum number of concurrent object store reads performed by all Delta Lake connectors.\n\nThis setting is used to limit the number of concurrent reads of the object store in a\npipeline with a large number of Delta Lake connectors. When multiple connectors are simultaneously\nreading from the object store, this can lead to transport timeouts.\n\nWhen enabled, this setting limits the number of concurrent reads across all connectors.\nThis is a global setting that affects all Delta Lake connectors, and not just the connector\nwhere it is specified. It should therefore be used at most once in a pipeline. If multiple\nconnectors specify this setting, they must all use the same value.\n\nThe default value is 6.",
"nullable": true,
"minimum": 0
},
"mode": {
"$ref": "#/components/schemas/DeltaTableIngestMode"
},
"num_parsers": {
"type": "integer",
"format": "int32",
"description": "The number of parallel parsing tasks the connector uses to process data read from the\ntable. Increasing this value can enhance performance by allowing more concurrent processing.\nRecommended range: 1–10. The default is 4.",
"minimum": 0
},
"skip_unused_columns": {
"type": "boolean",
"description": "Don't read unused columns from the Delta table.\n\nWhen set to `true`, this option instructs the connector to avoid reading\ncolumns from the Delta table that are not used in any view definitions.\nTo be skipped, the columns must be either nullable or have default\nvalues. This can improve ingestion performance, especially for wide\ntables.\n\nNote: The simplest way to exclude unused columns is to omit them from the Feldera SQL table\ndeclaration. The connector never reads columns that aren't declared in the SQL schema.\nAdditionally, the SQL compiler emits warnings for declared but unused columns—use these as\na guide to optimize your schema."
},
"snapshot_filter": {
"type": "string",
"description": "Optional snapshot filter.\n\nThis option is only valid when `mode` is set to `snapshot` or `snapshot_and_follow`.\n\nWhen specified, only rows that satisfy the filter condition are included in the\nsnapshot. The condition must be a valid SQL Boolean expression that can be used in\nthe `where` clause of the `select * from snapshot where ...` query.\n\nUnlike the `filter` option, which applies to all records retrieved from the table, this\nfilter only applies to rows in the initial snapshot of the table.\nFor instance, it can be used to specify the range of event times to include in the snapshot,\ne.g.: `ts BETWEEN TIMESTAMP '2005-01-01 00:00:00' AND TIMESTAMP '2010-12-31 23:59:59'`.\n\nThis option can be used together with the `filter` option. During the initial snapshot,\nonly rows that satisfy both `filter` and `snapshot_filter` are retrieved from the Delta table.\nWhen subsequently following changes in the the transaction log (`mode = snapshot_and_follow`),\nall rows that meet the `filter` condition are ingested, regardless of `snapshot_filter`.",
"nullable": true
},
"timestamp_column": {
"type": "string",
"description": "Table column that serves as an event timestamp.\n\nWhen this option is specified, and `mode` is one of `snapshot` or `snapshot_and_follow`,\ntable rows are ingested in the timestamp order, respecting the\n[`LATENESS`](https://docs.feldera.com/sql/streaming#lateness-expressions)\nproperty of the column: each ingested row has a timestamp no more than `LATENESS`\ntime units earlier than the most recent timestamp of any previously ingested row.\nThe ingestion is performed by partitioning the table into timestamp ranges of width\n`LATENESS`. Each range is processed sequentially, in increasing timestamp order.\n\n# Example\n\nConsider a table with timestamp column of type `TIMESTAMP` and lateness attribute\n`INTERVAL 1 DAY`. Assuming that the oldest timestamp in the table is\n`2024-01-01T00:00:00``, the connector will fetch all records with timestamps\nfrom `2024-01-01`, then all records for `2024-01-02`, `2024-01-03`, etc., until all records\nin the table have been ingested.\n\n# Requirements\n\n* The timestamp column must be of a supported type: integer, `DATE`, or `TIMESTAMP`.\n* The timestamp column must be declared with non-zero `LATENESS`.\n* For efficient ingest, the table must be optimized for timestamp-based\nqueries using partitioning, Z-ordering, or liquid clustering.",
"nullable": true
},
"transaction_mode": {
"$ref": "#/components/schemas/DeltaTableTransactionMode"
},
"uri": {
"type": "string",
"description": "Table URI.\n\nExample: \"s3://feldera-fraud-detection-data/demographics_train\""
},
"verbose": {
"type": "integer",
"format": "int32",
"description": "Enable verbose logging.\n\nWhen enabled, the connector will log detailed information at INFO level.\n\nSupported values:\n* 0 - no verbose logging\n* 1 - log all Delta log entries in follow and cdc modes.\n* >1 - reserved for future use",
"minimum": 0
},
"version": {
"type": "integer",
"format": "int64",
"description": "Optional table version.\n\nWhen this option is set, the connector finds and opens the specified version of the table.\nIn `snapshot` and `snapshot_and_follow` modes, it retrieves the snapshot of this version of\nthe table. In `follow`, `snapshot_and_follow`, and `cdc` modes, it follows transaction log records\n**after** this version.\n\nNote: at most one of `version` and `datetime` options can be specified.\nWhen neither of the two options is specified, the latest committed version of the table\nis used.",
"nullable": true
}
},
"additionalProperties": {
"type": "string",
"description": "Storage options for configuring backend object store.\n\nFor specific options available for different storage backends, see:\n* [Azure options](https://docs.rs/object_store/latest/object_store/azure/enum.AzureConfigKey.html)\n* [Amazon S3 options](https://docs.rs/object_store/latest/object_store/aws/enum.AmazonS3ConfigKey.html)\n* [Google Cloud Storage options](https://docs.rs/object_store/latest/object_store/gcp/enum.GoogleConfigKey.html)"
}
},
"DeltaTableTransactionMode": {
"type": "string",
"description": "Delta table transaction mode.\n\nDetermines how the connector breaks up its input into transactions.\n\n* `none` - the connector does not break up its input into transactions.\n* `snapshot` - ingest the initial snapshot of the table in one or several transactions. If the connector is\nconfigured in the `snapshot_and_follow` mode, it will ingest the transaction log of the table without transactions.\n* `always` - all updates generated by the connector are broken up into transactions, both during the initial snapshot\nand when following the transaction log.\n\n# How table snapshot is ingested using transactions\n\nIf the connector is configured in the `snapshot` or `snapshot_and_follow` mode, and its\n`transaction_mode` is set to `snapshot` or `always`, it will ingest the snapshot of the\ntable in one or several transactions. The exact behavior depends on the value of the `timestamp_column` option.\nIf `timestamp_column` is not set, the connector will ingest the snapshot of the table in one big transaction.\n\nIf `timestamp_column` is set, the connector will ingest the snapshot of the table in a series of batches, one for\neach timestamp range of width equal to the `LATENESS` attribute of the `timestamp_column`. Each range will be\ningested in a separate transaction.\n\nSee `timestamp_column` documentation for more details.\n\n# How transaction log is ingested using transactions\n\nIf the connector is configured in the `follow`, `snapshot_and_follow`, or `cdc` mode, and its\n`transaction_mode` is set to `always`, it will ingest the transaction log of the table in a series of transactions,\ngenerating exactly one transaction for each entry in the table's transaction log.\n\nIn other words, Feldera transaction boundaries will precisely match the transaction boundaries of the Delta Lake table.",
"enum": [
"none",
"snapshot",
"always"
]
},
"DeltaTableWriteMode": {
"type": "string",
"description": "Delta table write mode.\n\nDetermines how the Delta table connector handles an existing table at the target location.",
"enum": [
"append",
"truncate",
"error_if_exists"
]
},
"DeltaTableWriterConfig": {
"type": "object",
"description": "Delta table output connector configuration.",
"required": [
"uri"
],
"properties": {
"mode": {
"$ref": "#/components/schemas/DeltaTableWriteMode"
},
"uri": {
"type": "string",
"description": "Table URI."
}
},
"additionalProperties": {
"type": "string",
"description": "Storage options for configuring backend object store.\n\nFor specific options available for different storage backends, see:\n* [Azure options](https://docs.rs/object_store/latest/object_store/azure/enum.AzureConfigKey.html)\n* [Amazon S3 options](https://docs.rs/object_store/latest/object_store/aws/enum.AmazonS3ConfigKey.html)\n* [Google Cloud Storage options](https://docs.rs/object_store/latest/object_store/gcp/enum.GoogleConfigKey.html)"
}
},
"Demo": {
"type": "object",
"required": [
"name",
"title",
"description",
"program_code",
"udf_rust",
"udf_toml"
],
"properties": {
"description": {
"type": "string",
"description": "Description of the demo (parsed from SQL preamble)."
},
"name": {
"type": "string",
"description": "Name of the demo (parsed from SQL preamble)."
},
"program_code": {
"type": "string",
"description": "Program SQL code."
},
"title": {
"type": "string",
"description": "Title of the demo (parsed from SQL preamble)."
},
"udf_rust": {
"type": "string",
"description": "User defined function (UDF) Rust code."
},
"udf_toml": {
"type": "string",
"description": "User defined function (UDF) TOML dependencies."
}
}
},
"DisplaySchedule": {
"oneOf": [
{
"type": "string",
"description": "Display it only once: after dismissal do not show it again",
"enum": [
"Once"
]
},
{
"type": "string",
"description": "Display it again the next session if it is dismissed",
"enum": [
"Session"
]
},
{
"type": "object",
"required": [
"Every"
],
"properties": {
"Every": {
"type": "object",
"description": "Display it again after a certain period of time after it is dismissed",
"required": [
"seconds"
],
"properties": {
"seconds": {
"type": "integer",
"format": "int64",
"minimum": 0
}
}
}
}
},
{
"type": "string",
"description": "Always display it, do not allow it to be dismissed",
"enum": [
"Always"
]
}
]
},
"ErrorResponse": {
"type": "object",
"description": "Information returned by REST API endpoints on error.",
"required": [
"message",
"error_code",
"details"
],
"properties": {
"details": {
"description": "Detailed error metadata.\nThe contents of this field is determined by `error_code`."
},
"error_code": {
"type": "string",
"description": "Error code is a string that specifies this error type.",
"example": "CodeSpecifyingErrorType"
},
"message": {
"type": "string",
"description": "Human-readable error message.",
"example": "Explanation of the error that occurred."
}
}
},
"ExtendedClusterMonitorEvent": {
"type": "object",
"description": "Extended cluster monitoring event with full details.",
"required": [
"id",
"recorded_at",
"api_status",
"api_self_info",
"api_resources_info",
"compiler_status",
"compiler_self_info",
"compiler_resources_info",
"runner_status",
"runner_self_info",
"runner_resources_info"
],
"properties": {
"api_resources_info": {
"type": "string",
"description": "Human-readable API server(s) status report of the resources backing the API server(s)\n-- in particular, the Kubernetes objects."
},
"api_self_info": {
"type": "string",
"description": "Human-readable API server(s) status report."
},
"api_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"compiler_resources_info": {
"type": "string",
"description": "Human-readable API server(s) status report of the resources backing the compiler server(s)\n-- in particular, the Kubernetes objects."
},
"compiler_self_info": {
"type": "string",
"description": "Human-readable compiler server(s) status report."
},
"compiler_status": {
"$ref": "#/components/schemas/MonitorStatus"
},
"id": {
"$ref": "#/components/schemas/ClusterMonitorEventId"
},
"recorded_at": {
"type": "string",
"format": "date-time",
"description": "Timestamp at which the event was recorded in the database. Because collecting the data for\nthe health checks can take time, this timestamp is approximate."
},
"runner_resources_info": {
"type": "string",
"description": "Human-readable API server(s) status report of the resources backing the runner\n-- in particular, the Kubernetes objects."
},
"runner_self_info": {
"type": "string",
"description": "Human-readable runner status report."
},
"runner_status": {
"$ref": "#/components/schemas/MonitorStatus"
}
}
},
"Field": {
"allOf": [
{
"$ref": "#/components/schemas/SqlIdentifier"
},
{
"type": "object",
"required": [
"columntype",
"unused"
],
"properties": {
"columntype": {
"$ref": "#/components/schemas/ColumnType"
},
"default": {
"type": "string",
"nullable": true
},
"lateness": {
"type": "string",
"nullable": true
},
"unused": {
"type": "boolean"
},
"watermark": {
"type": "string",
"nullable": true
}
}
}
],
"description": "A SQL field.\n\nMatches the SQL compiler JSON format."
},
"FileBackendConfig": {
"type": "object",
"description": "Configuration for local file system access.",
"properties": {
"async_threads": {
"type": "boolean",
"description": "Whether to use background threads for file I/O.\n\nBackground threads should improve performance, but they can reduce\nperformance if too few cores are available. This is provided for\ndebugging and fine-tuning and should ordinarily be left unset.",
"default": null,
"nullable": true
},
"ioop_delay": {
"type": "integer",
"format": "int64",
"description": "Per-I/O operation sleep duration, in milliseconds.\n\nThis is for simulating slow storage devices. Do not use this in\nproduction.",
"default": null,
"nullable": true,
"minimum": 0
},
"sync": {
"allOf": [
{
"$ref": "#/components/schemas/SyncConfig"
}
],
"default": null,
"nullable": true
}
}
},
"FileInputConfig": {
"type": "object",
"description": "Configuration for reading data from a file with `FileInputTransport`",
"required": [
"path"
],
"properties": {
"buffer_size_bytes": {
"type": "integer",
"description": "Read buffer size.\n\nDefault: when this parameter is not specified, a platform-specific\ndefault is used.",
"nullable": true,
"minimum": 0
},
"follow": {
"type": "boolean",
"description": "Enable file following.\n\nWhen `false`, the endpoint outputs an `InputConsumer::eoi`\nmessage and stops upon reaching the end of file. When `true`, the\nendpoint will keep watching the file and outputting any new content\nappended to it."
},
"path": {
"type": "string",
"description": "File path.\n\nThis may be a file name or a `file://` URL with an absolute path."
}
}
},
"FileOutputConfig": {
"type": "object",
"description": "Configuration for writing data to a file with `FileOutputTransport`.",
"required": [
"path"
],
"properties": {
"path": {
"type": "string",
"description": "File path."
}
}
},
"FormatConfig": {
"type": "object",
"description": "Data format specification used to parse raw data received from the\nendpoint or to encode data sent to the endpoint.",
"required": [
"name"
],
"properties": {
"config": {
"type": "object",
"description": "Format-specific parser or encoder configuration."
},
"name": {
"type": "string",
"description": "Format name, e.g., \"csv\", \"json\", \"bincode\", etc."
}
}
},
"FtConfig": {
"type": "object",
"description": "Fault-tolerance configuration.\n\nThe default [FtConfig] (via [FtConfig::default]) disables fault tolerance,\nwhich is the configuration that one gets if [RuntimeConfig] omits fault\ntolerance configuration.\n\nThe default value for [FtConfig::model] enables fault tolerance, as\n`Some(FtModel::default())`. This is the configuration that one gets if\n[RuntimeConfig] includes a fault tolerance configuration but does not\nspecify a particular model.",
"properties": {
"checkpoint_interval_secs": {
"type": "integer",
"format": "int64",
"description": "Interval between automatic checkpoints, in seconds.\n\nThe default is 60 seconds. Values less than 1 or greater than 3600 will\nbe forced into that range.",
"nullable": true,
"minimum": 0
},
"model": {
"oneOf": [
{
"$ref": "#/components/schemas/FtModel"
},
{
"type": "string",
"enum": [
"none"
]
}
],
"default": "exactly_once"
}
}
},
"FtModel": {
"type": "string",
"description": "Fault tolerance model.\n\nThe ordering is significant: we consider [Self::ExactlyOnce] to be a \"higher\nlevel\" of fault tolerance than [Self::AtLeastOnce].",
"enum": [
"at_least_once",
"exactly_once"
]
},
"GenerationPlan": {
"type": "object",
"description": "A random generation plan for a table that generates either a limited amount of rows or runs continuously.",
"properties": {
"fields": {
"type": "object",
"description": "Specifies the values that the generator should produce.",
"default": {},
"additionalProperties": {
"$ref": "#/components/schemas/RngFieldSettings"
}
},
"limit": {
"type": "integer",
"description": "Total number of new rows to generate.\n\nIf not set, the generator will produce new/unique records as long as the pipeline is running.\nIf set to 0, the table will always remain empty.\nIf set, the generator will produce new records until the specified limit is reached.\n\nNote that if the table has one or more primary keys that don't use the `increment` strategy to\ngenerate the key there is a potential that an update is generated instead of an insert. In\nthis case it's possible the total number of records is less than the specified limit.",
"default": null,
"nullable": true,
"minimum": 0
},
"rate": {
"type": "integer",
"format": "int32",
"description": "Non-zero number of rows to generate per second.\n\nIf not set, the generator will produce rows as fast as possible.",
"default": null,
"nullable": true,
"minimum": 0
},
"worker_chunk_size": {
"type": "integer",
"description": "When multiple workers are used, each worker will pick a consecutive \"chunk\" of\nrecords to generate.\n\nBy default, if not specified, the generator will use the formula `min(rate, 10_000)`\nto determine it. This works well in most situations. However, if you're\nrunning tests with lateness and many workers you can e.g., reduce the\nchunk size to make sure a smaller range of records is being ingested in parallel.\n\nThis also controls the sizes of input batches. If, for example, `rate`\nand `worker_chunk_size` are both 1000, with a single worker, the\ngenerator will output 1000 records once a second. But if we reduce\n`worker_chunk_size` to 100 without changing `rate`, the generator will\ninstead output 100 records 10 times per second.\n\n# Example\nAssume you generate a total of 125 records with 4 workers and a chunk size of 25.\nIn this case, worker A will generate records 0..25, worker B will generate records 25..50,\netc. A, B, C, and D will generate records in parallel. The first worker to finish its chunk\nwill pick up the last chunk of records (100..125) to generate.",
"default": null,
"nullable": true,
"minimum": 0
}
},
"additionalProperties": false
},
"GetClusterEventParameters": {
"type": "object",
"description": "Query parameters to GET a cluster monitor event.",
"properties": {
"selector": {
"$ref": "#/components/schemas/ClusterMonitorEventFieldSelector"
}
}
},
"GetPipelineParameters": {
"type": "object",
"description": "Query parameters to GET a pipeline or a list of pipelines.",
"properties": {
"selector": {
"$ref": "#/components/schemas/PipelineFieldSelector"
}
}
},
"GlobalControllerMetrics": {
"type": "object",
"description": "Global controller metrics.",
"required": [
"state",
"bootstrap_in_progress",
"transaction_status",
"transaction_id",
"transaction_initiators",
"rss_bytes",
"cpu_msecs",
"uptime_msecs",
"start_time",
"incarnation_uuid",
"initial_start_time",
"storage_bytes",
"storage_mb_secs",
"runtime_elapsed_msecs",
"buffered_input_records",
"buffered_input_bytes",
"total_input_records",
"total_input_bytes",
"total_processed_records",
"total_processed_bytes",
"total_completed_records",
"pipeline_complete"
],
"properties": {
"bootstrap_in_progress": {
"type": "boolean",
"description": "The pipeline has been resumed from a checkpoint and is currently bootstrapping new and modified views."
},
"buffered_input_bytes": {
"type": "integer",
"format": "int64",
"description": "Total number of bytes currently buffered by all endpoints.",
"minimum": 0
},
"buffered_input_records": {
"type": "integer",
"format": "int64",
"description": "Total number of records currently buffered by all endpoints.",
"minimum": 0
},
"cpu_msecs": {
"type": "integer",
"format": "int64",
"description": "CPU time used by the pipeline across all threads, in milliseconds.",
"minimum": 0
},
"incarnation_uuid": {
"type": "string",
"format": "uuid",
"description": "Uniquely identifies the pipeline process that started at start_time."
},
"initial_start_time": {
"type": "integer",
"format": "int64",
"description": "Time at which the pipeline process from which we resumed started, in seconds since the epoch.",
"minimum": 0
},
"pipeline_complete": {
"type": "boolean",
"description": "True if the pipeline has processed all input data to completion."
},
"rss_bytes": {
"type": "integer",
"format": "int64",
"description": "Resident set size of the pipeline process, in bytes.",
"minimum": 0
},
"runtime_elapsed_msecs": {
"type": "integer",
"format": "int64",
"description": "Time elapsed while the pipeline is executing a step, multiplied by the number of threads, in milliseconds.",
"minimum": 0
},
"start_time": {
"type": "integer",
"format": "int64",
"description": "Time at which the pipeline process started, in seconds since the epoch.",
"minimum": 0
},
"state": {
"$ref": "#/components/schemas/PipelineState"
},
"storage_bytes": {
"type": "integer",
"format": "int64",
"description": "Current storage usage in bytes.",
"minimum": 0
},
"storage_mb_secs": {
"type": "integer",
"format": "int64",
"description": "Storage usage integrated over time, in megabytes * seconds.",
"minimum": 0
},
"total_completed_records": {
"type": "integer",
"format": "int64",
"description": "Total number of input records processed to completion.",
"minimum": 0
},
"total_input_bytes": {
"type": "integer",
"format": "int64",
"description": "Total number of bytes received from all endpoints.",
"minimum": 0
},
"total_input_records": {
"type": "integer",
"format": "int64",
"description": "Total number of records received from all endpoints.",
"minimum": 0
},
"total_processed_bytes": {
"type": "integer",
"format": "int64",
"description": "Total bytes of input records processed by the DBSP engine.",
"minimum": 0
},
"total_processed_records": {
"type": "integer",
"format": "int64",
"description": "Total number of input records processed by the DBSP engine.",
"minimum": 0
},
"transaction_id": {
"type": "integer",
"format": "int64",
"description": "ID of the current transaction or 0 if no transaction is in progress."
},
"transaction_initiators": {
"$ref": "#/components/schemas/TransactionInitiators"
},
"transaction_status": {
"$ref": "#/components/schemas/TransactionStatus"
},
"uptime_msecs": {
"type": "integer",
"format": "int64",
"description": "Time since the pipeline process started, in milliseconds.",
"minimum": 0
}
}
},
"GlueCatalogConfig": {
"type": "object",
"description": "AWS Glue catalog config.",
"properties": {
"glue.access-key-id": {
"type": "string",
"description": "Access key id used to access the Glue catalog.",
"nullable": true
},
"glue.endpoint": {
"type": "string",
"description": "Configure an alternative endpoint of the Glue service for Glue catalog to access.\n\nExample: `\"https://glue.us-east-1.amazonaws.com\"`",
"nullable": true
},
"glue.id": {
"type": "string",
"description": "The 12-digit ID of the Glue catalog.",
"nullable": true
},
"glue.profile-name": {
"type": "string",
"description": "Profile used to access the Glue catalog.",
"nullable": true
},
"glue.region": {
"type": "string",
"description": "Region of the Glue catalog.",
"nullable": true
},
"glue.secret-access-key": {
"type": "string",
"description": "Secret access key used to access the Glue catalog.",
"nullable": true
},
"glue.session-token": {
"type": "string",
"nullable": true
},
"glue.warehouse": {
"type": "string",
"description": "Location for table metadata.\n\nExample: `\"s3://my-data-warehouse/tables/\"`",
"nullable": true
}
}
},
"HealthStatus": {
"type": "object",
"required": [
"all_healthy",
"api",
"compiler",
"runner"
],
"properties": {
"all_healthy": {
"type": "boolean"
},
"api": {
"$ref": "#/components/schemas/ServiceStatus"
},
"compiler": {
"$ref": "#/components/schemas/ServiceStatus"
},
"runner": {
"$ref": "#/components/schemas/ServiceStatus"
}
}
},
"HttpInputConfig": {
"type": "object",
"description": "Configuration for reading data via HTTP.\n\nHTTP input adapters cannot be usefully configured as part of pipeline\nconfiguration. Instead, instantiate them through the REST API as\n`/pipelines/{pipeline_name}/ingress/{table_name}`.",
"required": [
"name"
],
"properties": {
"name": {
"type": "string",
"description": "Autogenerated name."
}
}
},
"IcebergCatalogType": {
"type": "string",
"enum": [
"rest",
"glue"
]
},
"IcebergIngestMode": {
"type": "string",
"description": "Iceberg table read mode.\n\nThree options are available:\n\n* `snapshot` - read a snapshot of the table and stop.\n\n* `follow` - continuously ingest changes to the table, starting from a specified snapshot\nor timestamp.\n\n* `snapshot_and_follow` - read a snapshot of the table before switching to continuous ingestion\nmode.",
"enum": [
"snapshot",
"follow",
"snapshot_and_follow"
]
},
"IcebergReaderConfig": {
"allOf": [
{
"$ref": "#/components/schemas/GlueCatalogConfig"
},
{
"$ref": "#/components/schemas/RestCatalogConfig"
},
{
"type": "object",
"required": [
"mode"
],
"properties": {
"catalog_type": {
"allOf": [
{
"$ref": "#/components/schemas/IcebergCatalogType"
}
],
"nullable": true
},
"datetime": {
"type": "string",
"description": "Optional timestamp for the snapshot in the ISO-8601/RFC-3339 format, e.g.,\n\"2024-12-09T16:09:53+00:00\".\n\nWhen this option is set, the connector finds and opens the snapshot of the table as of the\nspecified point in time (based on the server time recorded in the transaction\nlog, not the event time encoded in the data). In `snapshot` and `snapshot_and_follow`\nmodes, it retrieves this snapshot. In `follow` and `snapshot_and_follow` modes, it\nfollows transaction log records **after** this snapshot.\n\nNote: at most one of `snapshot_id` and `datetime` options can be specified.\nWhen neither of the two options is specified, the latest committed version of the table\nis used.",
"nullable": true
},
"metadata_location": {
"type": "string",
"description": "Location of the table metadata JSON file.\n\nThis propery is used to access an Iceberg table without a catalog. It is mutually\nexclusive with the `catalog_type` property.",
"nullable": true
},
"mode": {
"$ref": "#/components/schemas/IcebergIngestMode"
},
"snapshot_filter": {
"type": "string",
"description": "Optional row filter.\n\nThis option is only valid when `mode` is set to `snapshot` or `snapshot_and_follow`.\n\nWhen specified, only rows that satisfy the filter condition are included in the\nsnapshot. The condition must be a valid SQL Boolean expression that can be used in\nthe `where` clause of the `select * from snapshot where ...` query.\n\nThis option can be used to specify the range of event times to include in the snapshot,\ne.g.: `ts BETWEEN '2005-01-01 00:00:00' AND '2010-12-31 23:59:59'`.",
"nullable": true
},
"snapshot_id": {
"type": "integer",
"format": "int64",
"description": "Optional snapshot id.\n\nWhen this option is set, the connector finds the specified snapshot of the table.\nIn `snapshot` and `snapshot_and_follow` modes, it loads this snapshot.\nIn `follow` and `snapshot_and_follow` modes, it follows table updates\n**after** this snapshot.\n\nNote: at most one of `snapshot_id` and `datetime` options can be specified.\nWhen neither of the two options is specified, the latest committed version of the table\nis used.",
"nullable": true
},
"table_name": {
"type": "string",
"description": "Specifies the Iceberg table name in the \"namespace.table\" format.\n\nThis option is applicable when an Iceberg catalog is configured using the `catalog_type` property.",
"nullable": true
},
"timestamp_column": {
"type": "string",
"description": "Table column that serves as an event timestamp.\n\nWhen this option is specified, and `mode` is one of `snapshot` or `snapshot_and_follow`,\ntable rows are ingested in the timestamp order, respecting the\n[`LATENESS`](https://docs.feldera.com/sql/streaming#lateness-expressions)\nproperty of the column: each ingested row has a timestamp no more than `LATENESS`\ntime units earlier than the most recent timestamp of any previously ingested row.\nThe ingestion is performed by partitioning the table into timestamp ranges of width\n`LATENESS`. Each range is processed sequentially, in increasing timestamp order.\n\n# Example\n\nConsider a table with timestamp column of type `TIMESTAMP` and lateness attribute\n`INTERVAL 1 DAY`. Assuming that the oldest timestamp in the table is\n`2024-01-01T00:00:00``, the connector will fetch all records with timestamps\nfrom `2024-01-01`, then all records for `2024-01-02`, `2024-01-03`, etc., until all records\nin the table have been ingested.\n\n# Requirements\n\n* The timestamp column must be of a supported type: integer, `DATE`, or `TIMESTAMP`.\n* The timestamp column must be declared with non-zero `LATENESS`.\n* For efficient ingest, the table must be optimized for timestamp-based\nqueries using partitioning, Z-ordering, or liquid clustering.",
"nullable": true
}
},
"additionalProperties": {
"type": "string",
"description": "Storage options for configuring backend object store.\n\nSee the [list of available options in PyIceberg documentation](https://py.iceberg.apache.org/configuration/#fileio)."
}
}
],
"description": "Iceberg input connector configuration."
},
"InputEndpointConfig": {
"allOf": [
{
"$ref": "#/components/schemas/ConnectorConfig"
},
{
"type": "object",
"required": [
"stream"
],
"properties": {
"stream": {
"type": "string",
"description": "The name of the input stream of the circuit that this endpoint is\nconnected to."
}
}
}
],
"description": "Describes an input connector configuration"
},
"InputEndpointMetrics": {
"type": "object",
"description": "Performance metrics for an input endpoint.",
"required": [
"total_bytes",
"total_records",
"buffered_records",
"buffered_bytes",
"num_transport_errors",
"num_parse_errors",
"end_of_input"
],
"properties": {
"buffered_bytes": {
"type": "integer",
"format": "int64",
"description": "Number of bytes currently buffered by the endpoint (not yet consumed by the circuit).",
"minimum": 0
},
"buffered_records": {
"type": "integer",
"format": "int64",
"description": "Number of records currently buffered by the endpoint (not yet consumed by the circuit).",
"minimum": 0
},
"end_of_input": {
"type": "boolean",
"description": "True if end-of-input has been signaled."
},
"num_parse_errors": {
"type": "integer",
"format": "int64",
"description": "Number of parse errors.",
"minimum": 0
},
"num_transport_errors": {
"type": "integer",
"format": "int64",
"description": "Number of transport errors.",
"minimum": 0
},
"total_bytes": {
"type": "integer",
"format": "int64",
"description": "Total bytes pushed to the endpoint since it was created.",
"minimum": 0
},
"total_records": {
"type": "integer",
"format": "int64",
"description": "Total records pushed to the endpoint since it was created.",
"minimum": 0
}
}
},
"InputEndpointStatus": {
"type": "object",
"description": "Input endpoint status information.",
"required": [
"endpoint_name",
"config",
"metrics",
"paused",
"barrier"
],
"properties": {
"barrier": {
"type": "boolean",
"description": "Endpoint is currently a barrier to checkpointing and suspend."
},
"completed_frontier": {
"allOf": [
{
"$ref": "#/components/schemas/CompletedWatermark"
}
],
"nullable": true
},
"config": {
"$ref": "#/components/schemas/ShortEndpointConfig"
},
"endpoint_name": {
"type": "string",
"description": "Endpoint name."
},
"fatal_error": {
"type": "string",
"description": "The first fatal error that occurred at the endpoint.",
"nullable": true
},
"metrics": {
"$ref": "#/components/schemas/InputEndpointMetrics"
},
"paused": {
"type": "boolean",
"description": "Endpoint has been paused by the user."
}
}
},
"IntervalUnit": {
"type": "string",
"description": "The specified units for SQL Interval types.\n\n`INTERVAL 1 DAY`, `INTERVAL 1 DAY TO HOUR`, `INTERVAL 1 DAY TO MINUTE`,\nwould yield `Day`, `DayToHour`, `DayToMinute`, as the `IntervalUnit` respectively.",
"enum": [
"Day",
"DayToHour",
"DayToMinute",
"DayToSecond",
"Hour",
"HourToMinute",
"HourToSecond",
"Minute",
"MinuteToSecond",
"Month",
"Second",
"Year",
"YearToMonth"
]
},
"JsonLines": {
"type": "string",
"description": "Whether JSON values can span multiple lines.",
"enum": [
"multiple",
"single"
]
},
"JsonUpdateFormat": {
"type": "string",
"description": "Supported JSON data change event formats.\n\nEach element in a JSON-formatted input stream specifies\nan update to one or more records in an input table. We support\nseveral different ways to represent such updates.\n\n### `InsertDelete`\n\nEach element in the input stream consists of an \"insert\" or \"delete\"\ncommand and a record to be inserted to or deleted from the input table.\n\n```json\n{\"insert\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n```\n\n### `Weighted`\n\nEach element in the input stream consists of a record and a weight\nwhich indicates how many times the row appears.\n\n```json\n{\"weight\": 2, \"data\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n```\n\nNote that the line above would be equivalent to the following input in the `InsertDelete` format:\n\n```json\n{\"insert\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n{\"insert\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n```\n\nSimilarly, negative weights are equivalent to deletions:\n\n```json\n{\"weight\": -1, \"data\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n```\n\nis equivalent to in the `InsertDelete` format:\n\n```json\n{\"delete\": {\"column1\": \"hello, world!\", \"column2\": 100}}\n```\n\n### `Debezium`\n\nDebezium CDC format. Refer to [Debezium input connector documentation](https://docs.feldera.com/connectors/sources/debezium) for details.\n\n### `Snowflake`\n\nUses flat structure so that fields can get parsed directly into SQL\ncolumns. Defines three metadata fields:\n\n* `__action` - \"insert\" or \"delete\"\n* `__stream_id` - unique 64-bit ID of the output stream (records within\na stream are totally ordered)\n* `__seq_number` - monotonically increasing sequence number relative to\nthe start of the stream.\n\n```json\n{\"PART\":1,\"VENDOR\":2,\"EFFECTIVE_SINCE\":\"2019-05-21\",\"PRICE\":\"10000\",\"__action\":\"insert\",\"__stream_id\":4523666124030717756,\"__seq_number\":1}\n```\n\n### `Raw`\n\nThis format is suitable for insert-only streams (no deletions).\nEach element in the input stream contains a record without any\nadditional envelope that gets inserted in the input table.",
"enum": [
"insert_delete",
"weighted",
"debezium",
"snowflake",
"raw",
"redis"
]
},
"KafkaHeader": {
"type": "object",
"description": "Kafka message header.",
"required": [
"key"
],
"properties": {
"key": {
"type": "string"
},
"value": {
"allOf": [
{
"$ref": "#/components/schemas/KafkaHeaderValue"
}
],
"nullable": true
}
}
},
"KafkaHeaderValue": {
"type": "string",
"format": "binary",
"description": "Kafka header value encoded as a UTF-8 string or a byte array."
},
"KafkaInputConfig": {
"type": "object",
"description": "Configuration for reading data from Kafka topics with `InputTransport`.",
"required": [
"topic",
"resume_earliest_if_data_expires"
],
"properties": {
"group_join_timeout_secs": {
"type": "integer",
"format": "int32",
"description": "Maximum timeout in seconds to wait for the endpoint to join the Kafka\nconsumer group during initialization.",
"minimum": 0
},
"include_headers": {
"type": "boolean",
"description": "Whether to include Kafka headers in the record metadata.\n\nWhen `true`, Kafka message headers are available via the `CONNECTOR_METADATA()` function.\nSee <https://docs.feldera.com/connectors/sources/kafka#metadata> for details.",
"nullable": true
},
"include_offset": {
"type": "boolean",
"description": "Whether to include Kafka message offset in the record metadata.\n\nWhen `true`, Kafka message offset is available via the `CONNECTOR_METADATA()` function.\nSee <https://docs.feldera.com/connectors/sources/kafka#metadata> for details.",
"nullable": true
},
"include_partition": {
"type": "boolean",
"description": "Whether to include Kafka partition in the record metadata.\n\nWhen `true`, Kafka partition from which the message was read is available via the `CONNECTOR_METADATA()` function.\nSee <https://docs.feldera.com/connectors/sources/kafka#metadata> for details.",
"nullable": true
},
"include_timestamp": {
"type": "boolean",
"description": "Whether to include Kafka message timestamp in the record metadata.\n\nWhen `true`, Kafka message timestamp is available via the `CONNECTOR_METADATA()` function.\nSee <https://docs.feldera.com/connectors/sources/kafka#metadata> for details.",
"nullable": true
},
"include_topic": {
"type": "boolean",
"description": "Whether to include Kafka topic in the record metadata.\n\nWhen `true`, Kafka topic from which the message was read is available via the `CONNECTOR_METADATA()` function.\nSee <https://docs.feldera.com/connectors/sources/kafka#metadata> for details.",
"nullable": true
},
"log_level": {
"allOf": [
{
"$ref": "#/components/schemas/KafkaLogLevel"
}
],
"nullable": true
},
"partitions": {
"type": "array",
"items": {
"type": "integer",
"format": "int32"
},
"description": "The list of Kafka partitions to read from.\n\nOnly the specified partitions will be consumed. If this field is not set,\nthe connector will consume from all available partitions.\n\nIf `start_from` is set to `offsets` and this field is provided, the\nnumber of partitions must exactly match the number of offsets, and the\norder of partitions must correspond to the order of offsets.\n\nIf offsets are provided for all partitions, this field can be omitted.",
"nullable": true
},
"poller_threads": {
"type": "integer",
"description": "Set to 1 or more to fix the number of threads used to poll\n`rdkafka`. Multiple threads can increase performance with small Kafka\nmessages; for large messages, one thread is enough. In either case, too\nmany threads can harm performance. If unset, the default is 3, which\nhelps with small messages but will not harm performance with large\nmessagee",
"nullable": true,
"minimum": 0
},
"region": {
"type": "string",
"description": "The AWS region to use while connecting to AWS Managed Streaming for Kafka (MSK).",
"nullable": true
},
"resume_earliest_if_data_expires": {
"type": "boolean",
"description": "By default, if the input connector resumes from a checkpoint and the\ndata where it needs to resume has expired from the Kafka topic, the\ninput connector fails initialization and the pipeline will fail to start.\n\nSet this to true to change the behavior so that, if data is not\navailable on resume, the input connector starts from the earliest\noffsets that are now available."
},
"start_from": {
"$ref": "#/components/schemas/KafkaStartFromConfig"
},
"topic": {
"type": "string",
"description": "Topic to subscribe to."
}
},
"additionalProperties": {
"type": "string",
"description": "Options passed directly to `rdkafka`.\n\n[`librdkafka` options](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)\nused to configure the Kafka consumer.\n\nThis input connector does not use consumer groups, so options related to\nconsumer groups are rejected, including:\n\n* `group.id`, if present, is ignored.\n* `auto.offset.reset` (use `start_from` instead).\n* \"enable.auto.commit\", if present, must be set to \"false\".\n* \"enable.auto.offset.store\", if present, must be set to \"false\"."
}
},
"KafkaLogLevel": {
"type": "string",
"description": "Kafka logging levels.",
"enum": [
"emerg",
"alert",
"critical",
"error",
"warning",
"notice",
"info",
"debug"
]
},
"KafkaOutputConfig": {
"type": "object",
"description": "Configuration for writing data to a Kafka topic with `OutputTransport`.",
"required": [
"topic"
],
"properties": {
"fault_tolerance": {
"allOf": [
{
"$ref": "#/components/schemas/KafkaOutputFtConfig"
}
],
"nullable": true
},
"headers": {
"type": "array",
"items": {
"$ref": "#/components/schemas/KafkaHeader"
},
"description": "Kafka headers to be added to each message produced by this connector."
},
"initialization_timeout_secs": {
"type": "integer",
"format": "int32",
"description": "Maximum timeout in seconds to wait for the endpoint to connect to\na Kafka broker.\n\nDefaults to 60.",
"minimum": 0
},
"kafka_service": {
"type": "string",
"description": "If specified, this service is used to provide defaults for the Kafka options.",
"nullable": true
},
"log_level": {
"allOf": [
{
"$ref": "#/components/schemas/KafkaLogLevel"
}
],
"nullable": true
},
"region": {
"type": "string",
"description": "The AWS region to use while connecting to AWS Managed Streaming for Kafka (MSK).",
"nullable": true
},
"topic": {
"type": "string",
"description": "Topic to write to."
}
},
"additionalProperties": {
"type": "string",
"description": "Options passed directly to `rdkafka`.\n\nSee [`librdkafka` options](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md)\nused to configure the Kafka producer."
}
},
"KafkaOutputFtConfig": {
"type": "object",
"description": "Fault tolerance configuration for Kafka output connector.",
"properties": {
"consumer_options": {
"type": "object",
"description": "Options passed to `rdkafka` for consumers only, as documented at\n[`librdkafka`\noptions](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).\n\nThese options override `kafka_options` for consumers, and may be empty.",
"default": {},
"additionalProperties": {
"type": "string"
}
},
"producer_options": {
"type": "object",
"description": "Options passed to `rdkafka` for producers only, as documented at\n[`librdkafka`\noptions](https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md).\n\nThese options override `kafka_options` for producers, and may be empty.",
"default": {},
"additionalProperties": {
"type": "string"
}
}
}
},
"KafkaStartFromConfig": {
"oneOf": [
{
"type": "string",
"description": "Start from the beginning of the topic.",
"enum": [
"earliest"
]
},
{
"type": "string",
"description": "Start from the current end of the topic.\n\nThis will only read any data that is added to the topic after the\nconnector initializes.",
"enum": [
"latest"
]
},
{
"type": "object",
"required": [
"offsets"
],
"properties": {
"offsets": {
"type": "array",
"items": {
"type": "integer",
"format": "int64"
},
"description": "Start from particular offsets in the topic.\n\nThe number of offsets must match the number of partitions in the topic."
}
}
},
{
"type": "object",
"required": [
"timestamp"
],
"properties": {
"timestamp": {
"type": "integer",
"format": "int64",
"description": "Start from a particular timestamp in the topic.\n\nKafka timestamps are in milliseconds since the epoch."
}
}
}
],
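"example": "earliest",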
"description": "Where to begin reading a Kafka topic."
},
"LicenseInformation": {
"type": "object",
"required": [
"current",
"is_trial",
"description_html",
"remind_schedule"
],
"properties": {
"current": {
"type": "string",
"format": "date-time",
"description": "Timestamp when the server responded."
},
"description_html": {
"type": "string",
"description": "Optional description of the advantages of extending the license / upgrading from a trial"
},
"extension_url": {
"type": "string",
"description": "URL that navigates the user to extend / upgrade their license",
"nullable": true
},
"is_trial": {
"type": "boolean",
"description": "Whether the license is a trial"
},
"remind_schedule": {
"$ref": "#/components/schemas/DisplaySchedule"
},
"remind_starting_at": {
"type": "string",
"format": "date-time",
"description": "Timestamp from which the user should be reminded of the license expiring soon",
"nullable": true
},
"valid_until": {
"type": "string",
"format": "date-time",
"description": "Timestamp at which point the license expires",
"nullable": true
}
}
},
"LicenseValidity": {
"oneOf": [
{
"type": "object",
"required": [
"Exists"
],
"properties": {
"Exists": {
"$ref": "#/components/schemas/LicenseInformation"
}
}
},
{
"type": "object",
"required": [
"DoesNotExistOrNotConfirmed"
],
"properties": {
"DoesNotExistOrNotConfirmed": {
"type": "string",
"description": "Either the license key is invalid according to the server, or the request that checks with\nthe server failed (e.g., if it could not reach the server)."
}
}
}
]
},
"MetricsFormat": {
"type": "string",
"description": "Circuit metrics output format.\n- `prometheus`: [format](https://github.com/prometheus/docs/blob/4b1b80f5f660a2f8dc25a54f52a65a502f31879a/docs/instrumenting/exposition_formats.md) expected by Prometheus\n- `json`: JSON format",
"enum": [
"prometheus",
"json"
]
},
"MetricsParameters": {
"type": "object",
"description": "Query parameters to retrieve pipeline circuit metrics.",
"properties": {
"format": {
"$ref": "#/components/schemas/MetricsFormat"
}
}
},
"MirInput": {
"type": "object",
"required": [
"node",
"output"
],
"properties": {
"node": {
"type": "string"
},
"output": {
"type": "integer",
"minimum": 0
}
},
"additionalProperties": {}
},
"MirNode": {
"type": "object",
"required": [
"operation"
],
"properties": {
"calcite": {
"allOf": [
{
"$ref": "#/components/schemas/CalciteId"
}
],
"nullable": true
},
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/MirInput"
}
},
"operation": {
"type": "string"
},
"outputs": {
"type": "array",
"items": {
"allOf": [
{
"$ref": "#/components/schemas/MirInput"
}
],
"nullable": true
},
"nullable": true
},
"persistent_id": {
"type": "string",
"nullable": true
},
"positions": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SourcePosition"
}
},
"table": {
"type": "string",
"nullable": true
},
"view": {
"type": "string",
"nullable": true
}
},
"additionalProperties": {}
},
"MonitorStatus": {
"type": "string",
"enum": [
"InitialUnhealthy",
"Unhealthy",
"Healthy"
]
},
"MultihostConfig": {
"type": "object",
"description": "Configuration for a multihost Feldera pipeline.\n\nThis configuration is primarily for the coordinator.",
"required": [
"hosts"
],
"properties": {
"hosts": {
"type": "integer",
"description": "Number of hosts to launch.\n\nFor the configuration to be truly multihost, this should be at least 2.\nA value of 1 still runs the multihost coordinator but it only\ncoordinates a single host.",
"minimum": 0
}
}
},
"NatsInputConfig": {
"type": "object",
"required": [
"connection_config",
"stream_name",
"consumer_config"
],
"properties": {
"connection_config": {
"$ref": "#/components/schemas/ConnectOptions"
},
"consumer_config": {
"$ref": "#/components/schemas/ConsumerConfig"
},
"stream_name": {
"type": "string"
}
}
},
"NewApiKeyRequest": {
"type": "object",
"description": "Request to create a new API key.",
"required": [
"name"
],
"properties": {
"name": {
"type": "string",
"description": "Key name.",
"example": "my-api-key"
}
}
},
"NewApiKeyResponse": {
"type": "object",
"description": "Response to a successful API key creation.",
"required": [
"id",
"name",
"api_key"
],
"properties": {
"api_key": {
"type": "string",
"description": "Generated secret API key. There is no way to retrieve this\nkey again through the API, so store it securely.",
"example": "apikey:v5y5QNtlPNVMwkmNjKwFU8bbIu5lMge3yHbyddxAOdXlEo84SEoNn32DUhQaf1KLeI9aOOfnJjhQ1pYzMrU4wQXON6pm6BS7Zgzj46U2b8pwz1280vYBEtx41hiDBRP"
},
"id": {
"$ref": "#/components/schemas/ApiKeyId"
},
"name": {
"type": "string",
"description": "API key name provided by the user.",
"example": "my-api-key"
}
}
},
"NexmarkInputConfig": {
"type": "object",
"description": "Configuration for generating Nexmark input data.\n\nThis connector must be used exactly three times in a pipeline if it is used\nat all, once for each [`NexmarkTable`].",
"required": [
"table"
],
"properties": {
"options": {
"allOf": [
{
"$ref": "#/components/schemas/NexmarkInputOptions"
}
],
"nullable": true
},
"table": {
"$ref": "#/components/schemas/NexmarkTable"
}
}
},
"NexmarkInputOptions": {
"type": "object",
"description": "Configuration for generating Nexmark input data.",
"properties": {
"batch_size_per_thread": {
"type": "integer",
"format": "int64",
"description": "Number of events to generate and submit together, per thread.\n\nEach thread generates this many records, which are then combined with\nthe records generated by the other threads, to form combined input\nbatches of size `threads × batch_size_per_thread`.",
"default": 1000,
"minimum": 0
},
"events": {
"type": "integer",
"format": "int64",
"description": "Number of events to generate.",
"default": 100000000,
"minimum": 0
},
"max_step_size_per_thread": {
"type": "integer",
"format": "int64",
"description": "Maximum number of events to submit in a single step, per thread.\n\nThis should really be per worker thread, not per generator thread, but\nthe connector does not know how many worker threads there are.\n\nThis stands in for `max_batch_size` from the connector configuration\nbecause it must be a constant across all three of the nexmark tables.",
"default": 10000,
"minimum": 0
},
"threads": {
"type": "integer",
"description": "Number of event generator threads.\n\nIt's reasonable to choose the same number of generator threads as worker\nthreads.",
"default": 4,
"minimum": 0
}
}
},
"NexmarkTable": {
"type": "string",
"description": "Table in Nexmark.",
"enum": [
"bid",
"auction",
"person"
]
},
"ObjectStorageConfig": {
"type": "object",
"required": [
"url"
],
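"example": {
"url": "s3://my-bucket/data/",
"region": "us-east-1"
},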
"properties": {
"url": {
"type": "string",
"description": "URL.\n\nThe following URL schemes are supported:\n\n* S3:\n- `s3://<bucket>/<path>`\n- `s3a://<bucket>/<path>`\n- `https://s3.<region>.amazonaws.com/<bucket>`\n- `https://<bucket>.s3.<region>.amazonaws.com`\n- `https://ACCOUNT_ID.r2.cloudflarestorage.com/bucket`\n* Google Cloud Storage:\n- `gs://<bucket>/<path>`\n* Microsoft Azure Blob Storage:\n- `abfs[s]://<container>/<path>` (according to [fsspec](https://github.com/fsspec/adlfs))\n- `abfs[s]://<file_system>@<account_name>.dfs.core.windows.net/<path>`\n- `abfs[s]://<file_system>@<account_name>.dfs.fabric.microsoft.com/<path>`\n- `az://<container>/<path>` (according to [fsspec](https://github.com/fsspec/adlfs))\n- `adl://<container>/<path>` (according to [fsspec](https://github.com/fsspec/adlfs))\n- `azure://<container>/<path>` (custom)\n- `https://<account>.dfs.core.windows.net`\n- `https://<account>.blob.core.windows.net`\n- `https://<account>.blob.core.windows.net/<container>`\n- `https://<account>.dfs.fabric.microsoft.com`\n- `https://<account>.dfs.fabric.microsoft.com/<container>`\n- `https://<account>.blob.fabric.microsoft.com`\n- `https://<account>.blob.fabric.microsoft.com/<container>`\n\nSettings derived from the URL will override other settings."
}
},
"additionalProperties": {
"type": "string",
"description": "Additional options as key-value pairs.\n\nThe following keys are supported:\n\n* S3:\n- `access_key_id`: AWS Access Key.\n- `secret_access_key`: AWS Secret Access Key.\n- `region`: Region.\n- `default_region`: Default region.\n- `endpoint`: Custom endpoint for communicating with S3,\ne.g. `https://localhost:4566` for testing against a localstack\ninstance.\n- `token`: Token to use for requests (passed to underlying provider).\n- [Other keys](https://docs.rs/object_store/latest/object_store/aws/enum.AmazonS3ConfigKey.html#variants).\n* Google Cloud Storage:\n- `service_account`: Path to the service account file.\n- `service_account_key`: The serialized service account key.\n- `google_application_credentials`: Application credentials path.\n- [Other keys](https://docs.rs/object_store/latest/object_store/gcp/enum.GoogleConfigKey.html).\n* Microsoft Azure Blob Storage:\n- `access_key`: Azure Access Key.\n- `container_name`: Azure Container Name.\n- `account`: Azure Account.\n- `bearer_token_authorization`: Static bearer token for authorizing requests.\n- `client_id`: Client ID for use in client secret or Kubernetes federated credential flow.\n- `client_secret`: Client secret for use in client secret flow.\n- `tenant_id`: Tenant ID for use in client secret or Kubernetes federated credential flow.\n- `endpoint`: Override the endpoint for communicating with blob storage.\n- [Other keys](https://docs.rs/object_store/latest/object_store/azure/enum.AzureConfigKey.html#variants).\n\nOptions set through the URL take precedence over those set with these\noptions."
}
},
"Op": {
"type": "object",
"required": [
"kind",
"name",
"syntax"
],
"properties": {
"kind": {
"type": "string"
},
"name": {
"type": "string"
},
"syntax": {
"type": "string"
}
},
"additionalProperties": {}
},
"Operand": {
"type": "object",
"properties": {
"input": {
"type": "integer",
"nullable": true,
"minimum": 0
},
"name": {
"type": "string",
"nullable": true
}
},
"additionalProperties": {}
},
"OutputBufferConfig": {
"type": "object",
"properties": {
"enable_output_buffer": {
"type": "boolean",
"description": "Enable output buffering.\n\nThe output buffering mechanism allows decoupling the rate at which the pipeline\npushes changes to the output transport from the rate of input changes.\n\nBy default, output updates produced by the pipeline are pushed directly to\nthe output transport. Some destinations may prefer to receive updates in fewer\nbigger batches. For instance, when writing Parquet files, producing\none bigger file every few minutes is usually better than creating\nsmall files every few milliseconds.\n\nTo achieve such input/output decoupling, users can enable output buffering by\nsetting the `enable_output_buffer` flag to `true`. When buffering is enabled, output\nupdates produced by the pipeline are consolidated in an internal buffer and are\npushed to the output transport when one of several conditions is satisfied:\n\n* data has been accumulated in the buffer for more than `max_output_buffer_time_millis`\nmilliseconds.\n* buffer size exceeds `max_output_buffer_size_records` records.\n\nThis flag is `false` by default.",
"default": false
},
"max_output_buffer_size_records": {
"type": "integer",
"description": "Maximum number of updates to be kept in the output buffer.\n\nThis parameter bounds the maximal size of the buffer.\nNote that the size of the buffer is not always equal to the\ntotal number of updates output by the pipeline. Updates to the\nsame record can overwrite or cancel previous updates.\n\nBy default, the buffer can grow indefinitely until one of\nthe other output conditions is satisfied.\n\nNOTE: this configuration option requires the `enable_output_buffer` flag\nto be set.",
"default": 18446744073709551615,
"minimum": 0
},
"max_output_buffer_time_millis": {
"type": "integer",
"description": "Maximum time in milliseconds data is kept in the output buffer.\n\nBy default, data is kept in the buffer indefinitely until one of\nthe other output conditions is satisfied. When this option is\nset the buffer will be flushed at most every\n`max_output_buffer_time_millis` milliseconds.\n\nNOTE: this configuration option requires the `enable_output_buffer` flag\nto be set.",
"default": 18446744073709551615,
"minimum": 0
}
}
},
"OutputEndpointConfig": {
"allOf": [
{
"$ref": "#/components/schemas/ConnectorConfig"
},
{
"type": "object",
"required": [
"stream"
],
"properties": {
"stream": {
"type": "string",
"description": "The name of the output stream of the circuit that this endpoint is\nconnected to."
}
}
}
],
"description": "Describes an output connector configuration"
},
"OutputEndpointMetrics": {
"type": "object",
"description": "Performance metrics for an output endpoint.",
"required": [
"transmitted_records",
"transmitted_bytes",
"queued_records",
"queued_batches",
"buffered_records",
"buffered_batches",
"num_encode_errors",
"num_transport_errors",
"total_processed_input_records",
"memory"
],
"properties": {
"buffered_batches": {
"type": "integer",
"format": "int64",
"description": "Number of batches in the buffer.",
"minimum": 0
},
"buffered_records": {
"type": "integer",
"format": "int64",
"description": "Number of records pushed to the output buffer.",
"minimum": 0
},
"memory": {
"type": "integer",
"format": "int64",
"description": "Extra memory in use beyond that used for queuing records.",
"minimum": 0
},
"num_encode_errors": {
"type": "integer",
"format": "int64",
"description": "Number of encoding errors.",
"minimum": 0
},
"num_transport_errors": {
"type": "integer",
"format": "int64",
"description": "Number of transport errors.",
"minimum": 0
},
"queued_batches": {
"type": "integer",
"format": "int64",
"description": "Number of queued batches.",
"minimum": 0
},
"queued_records": {
"type": "integer",
"format": "int64",
"description": "Number of queued records.",
"minimum": 0
},
"total_processed_input_records": {
"type": "integer",
"format": "int64",
"description": "The number of input records processed by the circuit.",
"minimum": 0
},
"transmitted_bytes": {
"type": "integer",
"format": "int64",
"description": "Bytes sent on the underlying transport.",
"minimum": 0
},
"transmitted_records": {
"type": "integer",
"format": "int64",
"description": "Records sent on the underlying transport.",
"minimum": 0
}
}
},
"OutputEndpointStatus": {
"type": "object",
"description": "Output endpoint status information.",
"required": [
"endpoint_name",
"config",
"metrics"
],
"properties": {
"config": {
"$ref": "#/components/schemas/ShortEndpointConfig"
},
"endpoint_name": {
"type": "string",
"description": "Endpoint name."
},
"fatal_error": {
"type": "string",
"description": "The first fatal error that occurred at the endpoint.",
"nullable": true
},
"metrics": {
"$ref": "#/components/schemas/OutputEndpointMetrics"
}
}
},
"PartialProgramInfo": {
"type": "object",
"description": "Program information is the result of the SQL compilation.",
"required": [
"schema",
"udf_stubs",
"input_connectors",
"output_connectors"
],
"properties": {
"input_connectors": {
"type": "object",
"description": "Input connectors derived from the schema.",
"additionalProperties": {
"$ref": "#/components/schemas/InputEndpointConfig"
}
},
"output_connectors": {
"type": "object",
"description": "Output connectors derived from the schema.",
"additionalProperties": {
"$ref": "#/components/schemas/OutputEndpointConfig"
}
},
"schema": {
"$ref": "#/components/schemas/ProgramSchema"
},
"udf_stubs": {
"type": "string",
"description": "Generated user defined function (UDF) stubs Rust code: stubs.rs"
}
}
},
"PatchPipeline": {
"type": "object",
"description": "Partially update the pipeline (PATCH).\n\nNote that the patching only applies to the main fields, not subfields.\nFor instance, it is not possible to update only the number of workers;\nit is required to again pass the whole runtime configuration with the\nchange.",
"properties": {
"description": {
"type": "string",
"nullable": true
},
"name": {
"type": "string",
"nullable": true
},
"program_code": {
"type": "string",
"nullable": true
},
"program_config": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramConfig"
}
],
"nullable": true
},
"runtime_config": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeConfig"
}
],
"nullable": true
},
"udf_rust": {
"type": "string",
"nullable": true
},
"udf_toml": {
"type": "string",
"nullable": true
}
}
},
"PermanentSuspendError": {
"oneOf": [
{
"type": "string",
"enum": [
"StorageRequired"
]
},
{
"type": "string",
"enum": [
"EnterpriseFeature"
]
},
{
"type": "object",
"required": [
"UnsupportedInputEndpoint"
],
"properties": {
"UnsupportedInputEndpoint": {
"type": "string"
}
}
}
],
"description": "Reasons why a pipeline does not support suspend and resume operations."
},
"PipelineConfig": {
"allOf": [
{
"type": "object",
"description": "Global pipeline configuration settings. This is the publicly\nexposed type for users to configure pipelines.",
"properties": {
"checkpoint_during_suspend": {
"type": "boolean",
"description": "Deprecated: setting this true or false does not have an effect anymore.",
"default": true
},
"clock_resolution_usecs": {
"type": "integer",
"format": "int64",
"description": "Real-time clock resolution in microseconds.\n\nThis parameter controls the execution of queries that use the `NOW()` function. The output of such\nqueries depends on the real-time clock and can change over time without any external\ninputs. If the query uses `NOW()`, the pipeline will update the clock value and trigger incremental\nrecomputation at most each `clock_resolution_usecs` microseconds. If the query does not use\n`NOW()`, then clock value updates are suppressed and the pipeline ignores this setting.\n\nIt is set to 1 second (1,000,000 microseconds) by default.",
"default": 1000000,
"nullable": true,
"minimum": 0
},
"cpu_profiler": {
"type": "boolean",
"description": "Enable CPU profiler.\n\nThe default value is `true`.",
"default": true
},
"dev_tweaks": {
"type": "object",
"description": "Optional settings for tweaking Feldera internals.\n\nThe available key-value pairs change from one version of Feldera to\nanother, so users should not depend on particular settings being\navailable, or on their behavior.",
"default": {},
"additionalProperties": {}
},
"fault_tolerance": {
"allOf": [
{
"$ref": "#/components/schemas/FtConfig"
}
],
"default": {
"model": "none",
"checkpoint_interval_secs": 60
}
},
"hosts": {
"type": "integer",
"description": "Number of DBSP hosts.\n\nThe worker threads are evenly divided among the hosts. For single-host\ndeployments, this should be 1 (the default).\n\nMultihost pipelines are an enterprise-only preview feature.",
"default": 1,
"minimum": 0
},
"http_workers": {
"type": "integer",
"format": "int64",
"description": "Sets the number of available runtime threads for the http server.\n\nIn most cases, this does not need to be set explicitly and\nthe default is sufficient. Can be increased in case the\npipeline HTTP API operations are a bottleneck.\n\nIf not specified, the default is set to `workers`.",
"default": null,
"nullable": true,
"minimum": 0
},
"init_containers": {
"description": "Specification of additional (sidecar) containers.",
"nullable": true
},
"io_workers": {
"type": "integer",
"format": "int64",
"description": "Sets the number of available runtime threads for async IO tasks.\n\nThis affects some networking and file I/O operations\nespecially adapters and ad-hoc queries.\n\nIn most cases, this does not need to be set explicitly and\nthe default is sufficient. Can be increased in case\ningress, egress or ad-hoc queries are a bottleneck.\n\nIf not specified, the default is set to `workers`.",
"default": null,
"nullable": true,
"minimum": 0
},
"logging": {
"type": "string",
"description": "Log filtering directives.\n\nIf set to a valid [tracing-subscriber] filter, this controls the log\nmessages emitted by the pipeline process. Otherwise, or if the filter\nhas invalid syntax, messages at \"info\" severity and higher are written\nto the log and all others are discarded.\n\n[tracing-subscriber]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives",
"default": null,
"nullable": true
},
"max_buffering_delay_usecs": {
"type": "integer",
"format": "int64",
"description": "Maximal delay in microseconds to wait for `min_batch_size_records` to\nget buffered by the controller, defaults to 0.",
"default": 0,
"minimum": 0
},
"max_parallel_connector_init": {
"type": "integer",
"format": "int64",
"description": "The maximum number of connectors initialized in parallel during pipeline\nstartup.\n\nAt startup, the pipeline must initialize all of its input and output connectors.\nDepending on the number and types of connectors, this can take a long time.\nTo accelerate the process, multiple connectors are initialized concurrently.\nThis option controls the maximum number of connectors that can be initialized\nin parallel.\n\nThe default is 10.",
"default": null,
"nullable": true,
"minimum": 0
},
"min_batch_size_records": {
"type": "integer",
"format": "int64",
"description": "Minimal input batch size.\n\nThe controller delays pushing input records to the circuit until at\nleast `min_batch_size_records` records have been received (total\nacross all endpoints) or `max_buffering_delay_usecs` microseconds\nhave passed since at least one input records has been buffered.\nDefaults to 0.",
"default": 0,
"minimum": 0
},
"pin_cpus": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0
},
"description": "Optionally, a list of CPU numbers for CPUs to which the pipeline may pin\nits worker threads. Specify at least twice as many CPU numbers as\nworkers. CPUs are generally numbered starting from 0. The pipeline\nmight not be able to honor CPU pinning requests.\n\nCPU pinning can make pipelines run faster and perform more consistently,\nas long as different pipelines running on the same machine are pinned to\ndifferent CPUs.",
"default": []
},
"pipeline_template_configmap": {
"allOf": [
{
"$ref": "#/components/schemas/PipelineTemplateConfig"
}
],
"default": null,
"nullable": true
},
"provisioning_timeout_secs": {
"type": "integer",
"format": "int64",
"description": "Timeout in seconds for the `Provisioning` phase of the pipeline.\nSetting this value will override the default of the runner.",
"default": null,
"nullable": true,
"minimum": 0
},
"resources": {
"allOf": [
{
"$ref": "#/components/schemas/ResourceConfig"
}
],
"default": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
}
},
"storage": {
"allOf": [
{
"$ref": "#/components/schemas/StorageOptions"
}
],
"default": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"nullable": true
},
"tracing": {
"type": "boolean",
"description": "Enable pipeline tracing.",
"default": false
},
"tracing_endpoint_jaeger": {
"type": "string",
"description": "Jaeger tracing endpoint to send tracing information to.",
"default": "127.0.0.1:6831"
},
"workers": {
"type": "integer",
"format": "int32",
"description": "Number of DBSP worker threads.\n\nEach DBSP \"foreground\" worker thread is paired with a \"background\"\nthread for LSM merging, making the total number of threads twice the\nspecified number.\n\nThe typical sweet spot for the number of workers is between 4 and 16.\nEach worker increases overall memory consumption for data structures\nused during a step.",
"default": 8,
"minimum": 0
}
}
},
{
"type": "object",
"properties": {
"given_name": {
"type": "string",
"description": "Name given by the tenant to the pipeline. It is only unique within the same tenant, and can\nbe changed by the tenant when the pipeline is stopped.\n\nGiven a specific tenant, it can be used to find/identify a specific pipeline of theirs.",
"nullable": true
},
"inputs": {
"type": "object",
"description": "Input endpoint configuration.",
"additionalProperties": {
"$ref": "#/components/schemas/InputEndpointConfig"
}
},
"multihost": {
"allOf": [
{
"$ref": "#/components/schemas/MultihostConfig"
}
],
"nullable": true
},
"name": {
"type": "string",
"description": "Unique system-generated name of the pipeline (format: `pipeline-<uuid>`).\nIt is unique across all tenants and cannot be changed.\n\nThe `<uuid>` is also used in the naming of various resources that back the pipeline,\nand as such this name is useful to find/identify corresponding resources.",
"nullable": true
},
"outputs": {
"type": "object",
"description": "Output endpoint configuration.",
"additionalProperties": {
"$ref": "#/components/schemas/OutputEndpointConfig"
}
},
"program_ir": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramIr"
}
],
"nullable": true
},
"secrets_dir": {
"type": "string",
"description": "Directory containing values of secrets.\n\nIf this is not set, a default directory is used.",
"nullable": true
},
"storage_config": {
"allOf": [
{
"$ref": "#/components/schemas/StorageConfig"
}
],
"nullable": true
}
}
}
],
"description": "Pipeline deployment configuration.\nIt represents configuration entries directly provided by the user\n(e.g., runtime configuration) and entries derived from the schema\nof the compiled program (e.g., connectors). Storage configuration,\nif applicable, is set by the runner."
},
"PipelineDiff": {
"type": "object",
"description": "Summary of changes in the pipeline between checkpointed and new versions.",
"required": [
"added_input_connectors",
"modified_input_connectors",
"removed_input_connectors",
"added_output_connectors",
"modified_output_connectors",
"removed_output_connectors"
],
"properties": {
"added_input_connectors": {
"type": "array",
"items": {
"type": "string"
}
},
"added_output_connectors": {
"type": "array",
"items": {
"type": "string"
}
},
"modified_input_connectors": {
"type": "array",
"items": {
"type": "string"
}
},
"modified_output_connectors": {
"type": "array",
"items": {
"type": "string"
}
},
"program_diff": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramDiff"
}
],
"nullable": true
},
"program_diff_error": {
"type": "string",
"nullable": true
},
"removed_input_connectors": {
"type": "array",
"items": {
"type": "string"
}
},
"removed_output_connectors": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"PipelineFieldSelector": {
"type": "string",
"enum": [
"all",
"status",
"status_with_connectors"
]
},
"PipelineId": {
"type": "string",
"format": "uuid",
"description": "Pipeline identifier."
},
"PipelineInfo": {
"type": "object",
"description": "Pipeline information.\nIt both includes fields which are user-provided and system-generated.",
"required": [
"id",
"name",
"description",
"created_at",
"version",
"platform_version",
"runtime_config",
"program_code",
"udf_rust",
"udf_toml",
"program_config",
"program_version",
"program_status",
"program_status_since",
"program_error",
"refresh_version",
"storage_status",
"deployment_status",
"deployment_status_since",
"deployment_desired_status",
"deployment_desired_status_since",
"deployment_resources_status",
"deployment_resources_status_since",
"deployment_resources_desired_status",
"deployment_resources_desired_status_since"
],
"properties": {
"created_at": {
"type": "string",
"format": "date-time"
},
"deployment_desired_status": {
"$ref": "#/components/schemas/CombinedDesiredStatus"
},
"deployment_desired_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_error": {
"allOf": [
{
"$ref": "#/components/schemas/ErrorResponse"
}
],
"nullable": true
},
"deployment_id": {
"type": "string",
"format": "uuid",
"nullable": true
},
"deployment_initial": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeDesiredStatus"
}
],
"nullable": true
},
"deployment_resources_desired_status": {
"$ref": "#/components/schemas/ResourcesDesiredStatus"
},
"deployment_resources_desired_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_resources_status": {
"$ref": "#/components/schemas/ResourcesStatus"
},
"deployment_resources_status_details": {
"nullable": true
},
"deployment_resources_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_runtime_desired_status": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeDesiredStatus"
}
],
"nullable": true
},
"deployment_runtime_desired_status_since": {
"type": "string",
"format": "date-time",
"nullable": true
},
"deployment_runtime_status": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeStatus"
}
],
"nullable": true
},
"deployment_runtime_status_details": {
"nullable": true
},
"deployment_runtime_status_since": {
"type": "string",
"format": "date-time",
"nullable": true
},
"deployment_status": {
"$ref": "#/components/schemas/CombinedStatus"
},
"deployment_status_since": {
"type": "string",
"format": "date-time"
},
"description": {
"type": "string"
},
"id": {
"$ref": "#/components/schemas/PipelineId"
},
"name": {
"type": "string"
},
"platform_version": {
"type": "string"
},
"program_code": {
"type": "string"
},
"program_config": {
"$ref": "#/components/schemas/ProgramConfig"
},
"program_error": {
"$ref": "#/components/schemas/ProgramError"
},
"program_info": {
"allOf": [
{
"$ref": "#/components/schemas/PartialProgramInfo"
}
],
"nullable": true
},
"program_status": {
"$ref": "#/components/schemas/ProgramStatus"
},
"program_status_since": {
"type": "string",
"format": "date-time"
},
"program_version": {
"$ref": "#/components/schemas/Version"
},
"refresh_version": {
"$ref": "#/components/schemas/Version"
},
"runtime_config": {
"$ref": "#/components/schemas/RuntimeConfig"
},
"storage_status": {
"$ref": "#/components/schemas/StorageStatus"
},
"udf_rust": {
"type": "string"
},
"udf_toml": {
"type": "string"
},
"version": {
"$ref": "#/components/schemas/Version"
}
}
},
"PipelineSelectedInfo": {
"type": "object",
"description": "Pipeline information which has a selected subset of optional fields.\nIt both includes fields which are user-provided and system-generated.\nIf an optional field is not selected (i.e., is `None`), it will not be serialized.",
"required": [
"id",
"name",
"description",
"created_at",
"version",
"platform_version",
"program_version",
"program_status",
"program_status_since",
"refresh_version",
"storage_status",
"deployment_status",
"deployment_status_since",
"deployment_desired_status",
"deployment_desired_status_since",
"deployment_resources_status",
"deployment_resources_status_since",
"deployment_resources_desired_status",
"deployment_resources_desired_status_since"
],
"properties": {
"connectors": {
"allOf": [
{
"$ref": "#/components/schemas/ConnectorStats"
}
],
"nullable": true
},
"created_at": {
"type": "string",
"format": "date-time"
},
"deployment_desired_status": {
"$ref": "#/components/schemas/CombinedDesiredStatus"
},
"deployment_desired_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_error": {
"allOf": [
{
"$ref": "#/components/schemas/ErrorResponse"
}
],
"nullable": true
},
"deployment_id": {
"type": "string",
"format": "uuid",
"nullable": true
},
"deployment_initial": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeDesiredStatus"
}
],
"nullable": true
},
"deployment_resources_desired_status": {
"$ref": "#/components/schemas/ResourcesDesiredStatus"
},
"deployment_resources_desired_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_resources_status": {
"$ref": "#/components/schemas/ResourcesStatus"
},
"deployment_resources_status_details": {
"nullable": true
},
"deployment_resources_status_since": {
"type": "string",
"format": "date-time"
},
"deployment_runtime_desired_status": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeDesiredStatus"
}
],
"nullable": true
},
"deployment_runtime_desired_status_since": {
"type": "string",
"format": "date-time",
"nullable": true
},
"deployment_runtime_status": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeStatus"
}
],
"nullable": true
},
"deployment_runtime_status_details": {
"nullable": true
},
"deployment_runtime_status_since": {
"type": "string",
"format": "date-time",
"nullable": true
},
"deployment_status": {
"$ref": "#/components/schemas/CombinedStatus"
},
"deployment_status_since": {
"type": "string",
"format": "date-time"
},
"description": {
"type": "string"
},
"id": {
"$ref": "#/components/schemas/PipelineId"
},
"name": {
"type": "string"
},
"platform_version": {
"type": "string"
},
"program_code": {
"type": "string",
"nullable": true
},
"program_config": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramConfig"
}
],
"nullable": true
},
"program_error": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramError"
}
],
"nullable": true
},
"program_info": {
"allOf": [
{
"$ref": "#/components/schemas/PartialProgramInfo"
}
],
"nullable": true
},
"program_status": {
"$ref": "#/components/schemas/ProgramStatus"
},
"program_status_since": {
"type": "string",
"format": "date-time"
},
"program_version": {
"$ref": "#/components/schemas/Version"
},
"refresh_version": {
"$ref": "#/components/schemas/Version"
},
"runtime_config": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeConfig"
}
],
"nullable": true
},
"storage_status": {
"$ref": "#/components/schemas/StorageStatus"
},
"udf_rust": {
"type": "string",
"nullable": true
},
"udf_toml": {
"type": "string",
"nullable": true
},
"version": {
"$ref": "#/components/schemas/Version"
}
}
},
"PipelineState": {
"type": "string",
"description": "Pipeline state.",
"enum": [
"Paused",
"Running",
"Terminated"
]
},
"PipelineTemplateConfig": {
"type": "object",
"description": "Configuration for supplying a custom pipeline StatefulSet template via a Kubernetes ConfigMap.\n\nOperators can provide a custom StatefulSet YAML that the Kubernetes runner will use when\ncreating pipeline StatefulSets for a pipeline. The custom template must be stored as the\nvalue of a key in a ConfigMap in the same namespace as the pipeline; set `name` to the\nConfigMap name and `key` to the entry that contains the template.\n\nRecommendations and requirements:\n- **Start from the default template and modify it as needed.** The default template is present\nin ConfigMap named as `<release-name>-pipeline-template`, with key `pipelineTemplate` in the release\nnamespace and should be used as a reference.\n- The template must contain a valid Kubernetes `StatefulSet` manifest in YAML form. The\nrunner substitutes variables in the template before parsing; therefore the final YAML\nmust be syntactically valid.\n- The runner performs simple string substitution for the following placeholders. Please ensure these\nplaceholders are placed at appropriate location for their semantics:\n- `{id}`: pipeline Kubernetes name (used for object names and labels)\n- `{namespace}`: Kubernetes namespace where the pipeline runs\n- `{pipeline_executor_image}`: container image used to run the pipeline executor\n- `{binary_ref}`: program binary reference passed as an argument\n- `{program_info_ref}`: program info reference passed as an argument\n- `{pipeline_storage_path}`: mount path for persistent pipeline storage\n- `{storage_class_name}`: storage class name to use for PVCs (if applicable)\n- `{deployment_id}`: UUID identifying the deployment instance\n- `{deployment_initial}`: initial desired runtime status (e.g., `provisioning`)\n- `{bootstrap_policy}`: bootstrap policy value when applicable",
"required": [
"name"
],
"properties": {
"key": {
"type": "string",
"description": "Key in the ConfigMap containing the pipeline template.\n\nIf not set, defaults to `pipelineTemplate`.",
"default": "pipelineTemplate"
},
"name": {
"type": "string",
"description": "Name of the ConfigMap containing the pipeline template."
}
}
},
"PostPutPipeline": {
"type": "object",
"description": "Create a new pipeline (POST), or fully update an existing pipeline (PUT).\nFields which are optional and not provided will be set to their empty type value\n(for strings: an empty string `\"\"`, for objects: an empty dictionary `{}`).",
"required": [
"name",
"program_code"
],
"properties": {
"description": {
"type": "string",
"nullable": true
},
"name": {
"type": "string"
},
"program_code": {
"type": "string"
},
"program_config": {
"allOf": [
{
"$ref": "#/components/schemas/ProgramConfig"
}
],
"nullable": true
},
"runtime_config": {
"allOf": [
{
"$ref": "#/components/schemas/RuntimeConfig"
}
],
"nullable": true
},
"udf_rust": {
"type": "string",
"nullable": true
},
"udf_toml": {
"type": "string",
"nullable": true
}
}
},
"PostStopPipelineParameters": {
"type": "object",
"description": "Query parameters to POST a pipeline stop.",
"properties": {
"force": {
"type": "boolean",
"description": "The `force` parameter determines whether to immediately deprovision the pipeline compute\nresources (`force=true`) or first attempt to atomically checkpoint before doing so\n(`force=false`, which is the default)."
}
}
},
"PostgresReaderConfig": {
"type": "object",
"description": "Postgres input connector configuration.",
"required": [
"uri",
"query"
],
"properties": {
"query": {
"type": "string",
"description": "Query that specifies what data to fetch from postgres."
},
"uri": {
"type": "string",
"description": "Postgres URI.\nSee: <https://docs.rs/tokio-postgres/0.7.12/tokio_postgres/config/struct.Config.html>"
}
}
},
"PostgresWriteMode": {
"type": "string",
"description": "PostgreSQL write mode.\n\nDetermines how the PostgreSQL output connector writes data to the target table.",
"enum": [
"materialized",
"cdc"
]
},
"PostgresWriterConfig": {
"type": "object",
"description": "Postgres output connector configuration.",
"required": [
"uri",
"table"
],
"properties": {
"cdc_op_column": {
"type": "string",
"description": "Name of the operation metadata column in CDC mode.\n\nOnly used when `mode = \"cdc\"`. This column will contain:\n- `\"i\"` for insert operations\n- `\"u\"` for upsert operations\n- `\"d\"` for delete operations\n\nDefault: `\"__feldera_op\"`",
"default": "__feldera_op"
},
"cdc_ts_column": {
"type": "string",
"description": "Name of the timestamp metadata column in CDC mode.\n\nOnly used when `mode = \"cdc\"`. This column will contain the timestamp\n(in RFC 3339 format) when the batch of updates was output\nby the pipeline.\n\nDefault: `\"__feldera_ts\"`",
"default": "__feldera_ts"
},
"max_buffer_size_bytes": {
"type": "integer",
"description": "The maximum buffer size in for a single operation.\nNote that the buffers of `INSERT`, `UPDATE` and `DELETE` queries are\nseparate.\nDefault: 1 MiB",
"default": 1048576,
"minimum": 0
},
"max_records_in_buffer": {
"type": "integer",
"description": "The maximum number of records in a single buffer.",
"nullable": true,
"minimum": 0
},
"mode": {
"allOf": [
{
"$ref": "#/components/schemas/PostgresWriteMode"
}
],
"default": "materialized"
},
"on_conflict_do_nothing": {
"type": "boolean",
"description": "Specifies how the connector handles conflicts when executing an `INSERT`\ninto a table with a primary key. By default, an existing row with the same\nkey is overwritten. Setting this flag to `true` preserves the existing row\nand ignores the new insert.\n\nThis setting does not affect `UPDATE` statements, which always replace the\nvalue associated with the key.\n\nThis setting is not supported when `mode = \"cdc\"`, since all operations\nare performed as append-only `INSERT`s into the target table.\nAny conflict in CDC mode will result in an error.\n\nDefault: `false`"
},
"ssl_ca_location": {
"type": "string",
"description": "Path to a file containing a sequence of CA certificates in PEM format.",
"nullable": true
},
"ssl_ca_pem": {
"type": "string",
"description": "A sequence of CA certificates in PEM format.",
"nullable": true
},
"ssl_certificate_chain_location": {
"type": "string",
"description": "The path to the certificate chain file.\nThe file must contain a sequence of PEM-formatted certificates,\nthe first being the leaf certificate, and the remainder forming\nthe chain of certificates up to and including the trusted root certificate.",
"nullable": true
},
"ssl_client_key": {
"type": "string",
"description": "The client certificate key in PEM format.",
"nullable": true
},
"ssl_client_key_location": {
"type": "string",
"description": "Path to the client certificate key.",
"nullable": true
},
"ssl_client_location": {
"type": "string",
"description": "Path to the client certificate.",
"nullable": true
},
"ssl_client_pem": {
"type": "string",
"description": "The client certificate in PEM format.",
"nullable": true
},
"table": {
"type": "string",
"description": "The table to write the output to."
},
"uri": {
"type": "string",
"description": "Postgres URI.\nSee: <https://docs.rs/tokio-postgres/0.7.12/tokio_postgres/config/struct.Config.html>"
},
"verify_hostname": {
"type": "boolean",
"description": "True to enable hostname verification when using TLS. True by default.",
"nullable": true
}
}
},
"ProgramConfig": {
"type": "object",
"description": "Program configuration.",
"properties": {
"cache": {
"type": "boolean",
"description": "If `true` (default), when a prior compilation with the same checksum\nalready exists, the output of that (i.e., binary) is used.\nSet `false` to always trigger a new compilation, which might take longer\nand as well can result in overriding an existing binary.",
"default": true
},
"profile": {
"allOf": [
{
"$ref": "#/components/schemas/CompilationProfile"
}
],
"default": null,
"nullable": true
},
"runtime_version": {
"type": "string",
"description": "Override runtime version of the pipeline being executed.\n\nWarning: This setting is experimental and may change in the future.\nRequires the platform to run with the unstable feature `runtime_version`\nenabled. Should only be used for testing purposes, and requires\nnetwork access.\n\nA runtime version can be specified in the form of a version\nor SHA taken from the `feldera/feldera` repository main branch.\n\nExamples: `v0.96.0` or `f4dcac0989ca0fda7d2eb93602a49d007cb3b0ae`\n\nA platform of version `0.x.y` may be capable of running future and past\nruntimes with versions `>=0.x.y` and `<=0.x.y` until breaking API changes happen,\nthe exact bounds for each platform version are unspecified until we reach a\nstable version. Compatibility is only guaranteed if platform and runtime version\nare exact matches.\n\nNote that any enterprise features are currently considered to be part of\nthe platform.\n\nIf not set (null), the runtime version will be the same as the platform version.",
"default": null,
"nullable": true
}
}
},
"ProgramDiff": {
"type": "object",
"description": "Summary of changes in the program between checkpointed and new versions.",
"required": [
"added_tables",
"removed_tables",
"modified_tables",
"added_views",
"removed_views",
"modified_views"
],
"properties": {
"added_tables": {
"type": "array",
"items": {
"type": "string"
}
},
"added_views": {
"type": "array",
"items": {
"type": "string"
}
},
"modified_tables": {
"type": "array",
"items": {
"type": "string"
}
},
"modified_views": {
"type": "array",
"items": {
"type": "string"
}
},
"removed_tables": {
"type": "array",
"items": {
"type": "string"
}
},
"removed_views": {
"type": "array",
"items": {
"type": "string"
}
}
}
},
"ProgramError": {
"type": "object",
"description": "Log, warning and error information about the program compilation.",
"properties": {
"rust_compilation": {
"allOf": [
{
"$ref": "#/components/schemas/RustCompilationInfo"
}
],
"nullable": true
},
"sql_compilation": {
"allOf": [
{
"$ref": "#/components/schemas/SqlCompilationInfo"
}
],
"nullable": true
},
"system_error": {
"type": "string",
"description": "System error that occurred.\n- Set `Some(...)` upon transition to `SystemError`\n- Set `None` upon transition to `Pending`",
"nullable": true
}
}
},
"ProgramInfo": {
"type": "object",
"description": "Program information is the output of the SQL compiler.\n\nIt includes information needed for Rust compilation (e.g., generated Rust code)\nas well as only for runtime (e.g., schema, input/output connectors).",
"required": [
"schema",
"input_connectors",
"output_connectors"
],
"properties": {
"dataflow": {
"allOf": [
{
"$ref": "#/components/schemas/Dataflow"
}
],
"nullable": true
},
"input_connectors": {
"type": "object",
"description": "Input connectors derived from the schema.",
"additionalProperties": {
"$ref": "#/components/schemas/InputEndpointConfig"
}
},
"main_rust": {
"type": "string",
"description": "Generated main program Rust code: main.rs"
},
"output_connectors": {
"type": "object",
"description": "Output connectors derived from the schema.",
"additionalProperties": {
"$ref": "#/components/schemas/OutputEndpointConfig"
}
},
"schema": {
"$ref": "#/components/schemas/ProgramSchema"
},
"udf_stubs": {
"type": "string",
"description": "Generated user defined function (UDF) stubs Rust code: stubs.rs"
}
}
},
"ProgramIr": {
"type": "object",
"description": "Program information included in the pipeline configuration.",
"required": [
"mir",
"program_schema"
],
"properties": {
"mir": {
"type": "object",
"description": "The MIR of the program.",
"additionalProperties": {
"$ref": "#/components/schemas/MirNode"
}
},
"program_schema": {
"$ref": "#/components/schemas/ProgramSchema"
}
}
},
"ProgramSchema": {
"type": "object",
"description": "A struct containing the tables (inputs) and views for a program.\n\nParse from the JSON data-type of the DDL generated by the SQL compiler.",
"required": [
"inputs",
"outputs"
],
"properties": {
"inputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Relation"
}
},
"outputs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Relation"
}
}
}
},
"ProgramStatus": {
"type": "string",
"description": "Program compilation status.",
"enum": [
"Pending",
"CompilingSql",
"SqlCompiled",
"CompilingRust",
"Success",
"SqlError",
"RustError",
"SystemError"
]
},
"PropertyValue": {
"type": "object",
"required": [
"value",
"key_position",
"value_position"
],
"properties": {
"key_position": {
"$ref": "#/components/schemas/SourcePosition"
},
"value": {
"type": "string"
},
"value_position": {
"$ref": "#/components/schemas/SourcePosition"
}
}
},
"ProviderAwsCognito": {
"type": "object",
"required": [
"issuer",
"login_url",
"logout_url"
],
"properties": {
"issuer": {
"type": "string"
},
"login_url": {
"type": "string"
},
"logout_url": {
"type": "string"
}
}
},
"ProviderGenericOidc": {
"type": "object",
"required": [
"issuer",
"client_id",
"extra_oidc_scopes"
],
"properties": {
"client_id": {
"type": "string"
},
"extra_oidc_scopes": {
"type": "array",
"items": {
"type": "string"
}
},
"issuer": {
"type": "string"
}
}
},
"PubSubInputConfig": {
"type": "object",
"description": "Google Pub/Sub input connector configuration.",
"required": [
"subscription"
],
"properties": {
"connect_timeout_seconds": {
"type": "integer",
"format": "int32",
"description": "gRPC connection timeout.",
"nullable": true,
"minimum": 0
},
"credentials": {
"type": "string",
"description": "The content of a Google Cloud credentials JSON file.\n\nWhen this option is specified, the connector will use the provided credentials for\nauthentication. Otherwise, it will use Application Default Credentials (ADC) configured\nin the environment where the Feldera service is running. See\n[Google Cloud documentation](https://cloud.google.com/docs/authentication/provide-credentials-adc)\nfor information on configuring application default credentials.\n\nWhen running Feldera in an environment where ADC are not configured,\ne.g., a Docker container, use this option to ship Google Cloud credentials from another environment.\nFor example, if you use the\n[`gcloud auth application-default login`](https://cloud.google.com/pubsub/docs/authentication#client-libs)\ncommand for authentication in your local development environment, ADC are stored in the\n`.config/gcloud/application_default_credentials.json` file in your home directory.",
"nullable": true
},
"emulator": {
"type": "string",
"description": "Set in order to use a Pub/Sub [emulator](https://cloud.google.com/pubsub/docs/emulator)\ninstead of the production service, e.g., 'localhost:8681'.",
"nullable": true
},
"endpoint": {
"type": "string",
"description": "Override the default service endpoint 'pubsub.googleapis.com'",
"nullable": true
},
"pool_size": {
"type": "integer",
"format": "int32",
"description": "gRPC channel pool size.",
"nullable": true,
"minimum": 0
},
"project_id": {
"type": "string",
"description": "Google Cloud project_id.\n\nWhen not specified, the connector will use the project id associated\nwith the authenticated account.",
"nullable": true
},
"snapshot": {
"type": "string",
"description": "Reset subscription's backlog to a given snapshot on startup,\nusing the Pub/Sub `Seek` API.\n\nThis option is mutually exclusive with the `timestamp` option.",
"nullable": true
},
"subscription": {
"type": "string",
"description": "Subscription name."
},
"timeout_seconds": {
"type": "integer",
"format": "int32",
"description": "gRPC request timeout.",
"nullable": true,
"minimum": 0
},
"timestamp": {
"type": "string",
"description": "Reset subscription's backlog to a given timestamp on startup,\nusing the Pub/Sub `Seek` API.\n\nThe value of this option is an ISO 8601-encoded UTC time, e.g., \"2024-08-17T16:39:57-08:00\".\n\nThis option is mutually exclusive with the `snapshot` option.",
"nullable": true
}
}
},
"RedisOutputConfig": {
"type": "object",
"description": "Redis output connector configuration.",
"required": [
"connection_string"
],
"properties": {
"connection_string": {
"type": "string",
"description": "The URL format: `redis://[<username>][:<password>@]<hostname>[:port][/[<db>][?protocol=<protocol>]]`\nThis is parsed by the [redis](https://docs.rs/redis/latest/redis/#connection-parameters) crate."
},
"key_separator": {
"type": "string",
"description": "Separator used to join multiple components into a single key.\n\":\" by default."
}
}
},
"Rel": {
"type": "object",
"required": [
"id",
"relOp"
],
"properties": {
"aggs": {
"type": "array",
"items": {},
"nullable": true
},
"all": {
"type": "boolean",
"nullable": true
},
"condition": {
"allOf": [
{
"$ref": "#/components/schemas/Condition"
}
],
"nullable": true
},
"exprs": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Operand"
},
"nullable": true
},
"fields": {
"type": "array",
"items": {
"type": "string"
},
"nullable": true
},
"group": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0
},
"nullable": true
},
"id": {
"type": "integer",
"minimum": 0
},
"inputs": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0
}
},
"joinType": {
"type": "string",
"nullable": true
},
"relOp": {
"type": "string"
},
"table": {
"type": "array",
"items": {
"type": "string"
},
"description": "This is a vector where the elements concatenated form a fully qualified table name.\n\ne.g., usually is of the form `[$namespace, $table] / [schema, table]`",
"nullable": true
}
},
"additionalProperties": {}
},
"Relation": {
"allOf": [
{
"$ref": "#/components/schemas/SqlIdentifier"
},
{
"type": "object",
"required": [
"fields"
],
"properties": {
"fields": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Field"
}
},
"materialized": {
"type": "boolean"
},
"properties": {
"type": "object",
"additionalProperties": {
"$ref": "#/components/schemas/PropertyValue"
}
}
}
}
],
"description": "A SQL table or view. It has a name and a list of fields.\n\nMatches the Calcite JSON format."
},
"ReplayPolicy": {
"type": "string",
"enum": [
"Instant",
"Original"
]
},
"ResourceConfig": {
"type": "object",
"properties": {
"cpu_cores_max": {
"type": "number",
"format": "double",
"description": "The maximum number of CPU cores to reserve\nfor an instance of this pipeline",
"default": null,
"nullable": true
},
"cpu_cores_min": {
"type": "number",
"format": "double",
"description": "The minimum number of CPU cores to reserve\nfor an instance of this pipeline",
"default": null,
"nullable": true
},
"memory_mb_max": {
"type": "integer",
"format": "int64",
"description": "The maximum memory in Megabytes to reserve\nfor an instance of this pipeline",
"default": null,
"nullable": true,
"minimum": 0
},
"memory_mb_min": {
"type": "integer",
"format": "int64",
"description": "The minimum memory in Megabytes to reserve\nfor an instance of this pipeline",
"default": null,
"nullable": true,
"minimum": 0
},
"namespace": {
"type": "string",
"description": "Kubernetes namespace to use for an instance of this pipeline.\nThe namespace determines the scope of names for resources created\nfor the pipeline.\nIf not set, the pipeline will be deployed in the same namespace\nas the control-plane.",
"default": null,
"nullable": true
},
"service_account_name": {
"type": "string",
"description": "Kubernetes service account name to use for an instance of this pipeline.\nThe account determines permissions and access controls.",
"default": null,
"nullable": true
},
"storage_class": {
"type": "string",
"description": "Storage class to use for an instance of this pipeline.\nThe class determines storage performance such as IOPS and throughput.",
"default": null,
"nullable": true
},
"storage_mb_max": {
"type": "integer",
"format": "int64",
"description": "The total storage in Megabytes to reserve\nfor an instance of this pipeline",
"default": null,
"nullable": true,
"minimum": 0
}
}
},
"ResourcesDesiredStatus": {
"type": "string",
"enum": [
"Stopped",
"Provisioned"
]
},
"ResourcesStatus": {
"type": "string",
"description": "Pipeline resources status.\n\n```text\n/start (early start failed)\n┌───────────────────┐\n│ ▼\nStopped ◄────────── Stopping\n/start │ ▲\n│ │ /stop?force=true\n│ │ OR: timeout (from Provisioning)\n▼ │ OR: fatal runtime or resource error\n⌛Provisioning ────────────│ OR: runtime status is Suspended\n│ │\n│ │\n▼ │\nProvisioned ─────────────┘\n```\n\n### Desired and actual status\n\nWe use the desired state model to manage the lifecycle of a pipeline. In this model, the\npipeline has two status attributes associated with it: the **desired** status, which represents\nwhat the user would like the pipeline to do, and the **current** status, which represents the\nactual (last observed) status of the pipeline. The pipeline runner service continuously monitors\nthe desired status field to decide where to steer the pipeline towards.\n\nThere are two desired statuses:\n- `Provisioned` (set by invoking `/start`)\n- `Stopped` (set by invoking `/stop?force=true`)\n\nThe user can monitor the current status of the pipeline via the `GET /v0/pipelines/{name}`\nendpoint. In a typical scenario, the user first sets the desired status, e.g., by invoking the\n`/start` endpoint, and then polls the `GET /v0/pipelines/{name}` endpoint to monitor the actual\nstatus of the pipeline until its `deployment_resources_status` attribute changes to\n`Provisioned` indicating that the pipeline has been successfully provisioned, or `Stopped` with\n`deployment_error` being set.",
"enum": [
"Stopped",
"Provisioning",
"Provisioned",
"Stopping"
]
},
"RestCatalogConfig": {
"type": "object",
"description": "Iceberg REST catalog config.",
"properties": {
"rest.audience": {
"type": "string",
"description": "Logical name of target resource or service.",
"nullable": true
},
"rest.credential": {
"type": "string",
"description": "Credential to use for OAuth2 credential flow when initializing the catalog.\n\nA key and secret pair separated by \":\" (key is optional).",
"nullable": true
},
"rest.headers": {
"type": "array",
"items": {
"type": "array",
"items": {
"allOf": [
{
"type": "string"
},
{
"type": "string"
}
]
}
},
"description": "Additional HTTP request headers added to each catalog REST API call.",
"nullable": true
},
"rest.oauth2-server-uri": {
"type": "string",
"description": "Authentication URL to use for client credentials authentication (default: uri + 'v1/oauth/tokens')",
"nullable": true
},
"rest.prefix": {
"type": "string",
"description": "Customize table storage paths.\n\nWhen combined with the `warehouse` property, the prefix determines\nhow table data is organized within the storage.",
"nullable": true
},
"rest.resource": {
"type": "string",
"description": "URI for the target resource or service.",
"nullable": true
},
"rest.scope": {
"type": "string",
"nullable": true
},
"rest.token": {
"type": "string",
"description": "Bearer token value to use for `Authorization` header.",
"nullable": true
},
"rest.uri": {
"type": "string",
"description": "URI identifying the REST catalog server.",
"nullable": true
},
"rest.warehouse": {
"type": "string",
"description": "The default location for managed tables created by the catalog.",
"nullable": true
}
}
},
"RngFieldSettings": {
"type": "object",
"description": "Configuration for generating random data for a field of a table.",
"properties": {
"e": {
"type": "integer",
"format": "int64",
"description": "The frequency rank exponent for the Zipf distribution.\n\n- This value is only used if the strategy is set to `Zipf`.\n- The default value is 1.0.",
"default": 1
},
"fields": {
"type": "object",
"description": "Specifies the values that the generator should produce in case the field is a struct type.",
"default": null,
"additionalProperties": {
"$ref": "#/components/schemas/RngFieldSettings"
},
"nullable": true
},
"key": {
"allOf": [
{
"$ref": "#/components/schemas/RngFieldSettings"
}
],
"default": null,
"nullable": true
},
"null_percentage": {
"type": "integer",
"description": "Percentage of records where this field should be set to NULL.\n\nIf not set, the generator will produce only records with non-NULL values.\nIf set to `1..=100`, the generator will produce records with NULL values with the specified percentage.",
"default": null,
"nullable": true,
"minimum": 0
},
"range": {
"type": "object",
"description": "An optional, exclusive range [a, b) to limit the range of values the generator should produce.\n\n- For integer/floating point types specifies min/max values as an integer.\nIf not set, the generator will produce values for the entire range of the type for number types.\n- For string/binary types specifies min/max length as an integer, values are required to be >=0.\nIf not set, a range of [0, 25) is used by default.\n- For timestamp types specifies the min/max as two strings in the RFC 3339 format\n(e.g., [\"2021-01-01T00:00:00Z\", \"2022-01-02T00:00:00Z\"]).\nAlternatively, the range values can be specified as a number of non-leap\nmilliseconds since January 1, 1970 0:00:00.000 UTC (aka “UNIX timestamp”).\nIf not set, a range of [\"1970-01-01T00:00:00Z\", \"2100-01-01T00:00:00Z\") or [0, 4102444800000)\nis used by default.\n- For time types specifies the min/max as two strings in the \"HH:MM:SS\" format.\nAlternatively, the range values can be specified in milliseconds as two positive integers.\nIf not set, the range is 24h.\n- For date types, the min/max range is specified as two strings in the \"YYYY-MM-DD\" format.\nAlternatively, two integers that represent number of days since January 1, 1970 can be used.\nIf not set, a range of [\"1970-01-01\", \"2100-01-01\") or [0, 54787) is used by default.\n- For array types specifies the min/max number of elements as an integer.\nIf not set, a range of [0, 5) is used by default. Range values are required to be >=0.\n- For map types specifies the min/max number of key-value pairs as an integer.\nIf not set, a range of [0, 5) is used by default.\n- For struct/boolean/null types `range` is ignored."
},
"scale": {
"type": "integer",
"format": "int64",
"description": "A scale factor to apply a multiplier to the generated value.\n\n- For integer/floating point types, the value is multiplied by the scale factor.\n- For timestamp types, the generated value (milliseconds) is multiplied by the scale factor.\n- For time types, the generated value (milliseconds) is multiplied by the scale factor.\n- For date types, the generated value (days) is multiplied by the scale factor.\n- For string/binary/array/map/struct/boolean/null types, the scale factor is ignored.\n\n- If `values` is specified, the scale factor is ignored.\n- If `range` is specified and the range is required to be positive (struct, map, array etc.)\nthe scale factor is required to be positive too.\n\nThe default scale factor is 1.",
"default": 1
},
"strategy": {
"allOf": [
{
"$ref": "#/components/schemas/DatagenStrategy"
}
],
"default": "increment"
},
"value": {
"allOf": [
{
"$ref": "#/components/schemas/RngFieldSettings"
}
],
"default": null,
"nullable": true
},
"values": {
"type": "array",
"items": {
"type": "object"
},
"description": "An optional set of values the generator will pick from.\n\nIf set, the generator will pick values from the specified set.\nIf not set, the generator will produce values according to the specified range.\nIf set to an empty set, the generator will produce NULL values.\nIf set to a single value, the generator will produce only that value.\n\nNote that `range` is ignored if `values` is set.",
"default": null,
"nullable": true
}
},
"additionalProperties": false
},
"RuntimeConfig": {
"type": "object",
"description": "Global pipeline configuration settings. This is the publicly\nexposed type for users to configure pipelines.",
"properties": {
"checkpoint_during_suspend": {
"type": "boolean",
"description": "Deprecated: setting this true or false does not have an effect anymore.",
"default": true
},
"clock_resolution_usecs": {
"type": "integer",
"format": "int64",
"description": "Real-time clock resolution in microseconds.\n\nThis parameter controls the execution of queries that use the `NOW()` function. The output of such\nqueries depends on the real-time clock and can change over time without any external\ninputs. If the query uses `NOW()`, the pipeline will update the clock value and trigger incremental\nrecomputation at most each `clock_resolution_usecs` microseconds. If the query does not use\n`NOW()`, then clock value updates are suppressed and the pipeline ignores this setting.\n\nIt is set to 1 second (1,000,000 microseconds) by default.",
"default": 1000000,
"nullable": true,
"minimum": 0
},
"cpu_profiler": {
"type": "boolean",
"description": "Enable CPU profiler.\n\nThe default value is `true`.",
"default": true
},
"dev_tweaks": {
"type": "object",
"description": "Optional settings for tweaking Feldera internals.\n\nThe available key-value pairs change from one version of Feldera to\nanother, so users should not depend on particular settings being\navailable, or on their behavior.",
"default": {},
"additionalProperties": {}
},
"fault_tolerance": {
"allOf": [
{
"$ref": "#/components/schemas/FtConfig"
}
],
"default": {
"model": "none",
"checkpoint_interval_secs": 60
}
},
"hosts": {
"type": "integer",
"description": "Number of DBSP hosts.\n\nThe worker threads are evenly divided among the hosts. For single-host\ndeployments, this should be 1 (the default).\n\nMultihost pipelines are an enterprise-only preview feature.",
"default": 1,
"minimum": 0
},
"http_workers": {
"type": "integer",
"format": "int64",
"description": "Sets the number of available runtime threads for the http server.\n\nIn most cases, this does not need to be set explicitly and\nthe default is sufficient. Can be increased in case the\npipeline HTTP API operations are a bottleneck.\n\nIf not specified, the default is set to `workers`.",
"default": null,
"nullable": true,
"minimum": 0
},
"init_containers": {
"description": "Specification of additional (sidecar) containers.",
"nullable": true
},
"io_workers": {
"type": "integer",
"format": "int64",
"description": "Sets the number of available runtime threads for async IO tasks.\n\nThis affects some networking and file I/O operations\nespecially adapters and ad-hoc queries.\n\nIn most cases, this does not need to be set explicitly and\nthe default is sufficient. Can be increased in case\ningress, egress or ad-hoc queries are a bottleneck.\n\nIf not specified, the default is set to `workers`.",
"default": null,
"nullable": true,
"minimum": 0
},
"logging": {
"type": "string",
"description": "Log filtering directives.\n\nIf set to a valid [tracing-subscriber] filter, this controls the log\nmessages emitted by the pipeline process. Otherwise, or if the filter\nhas invalid syntax, messages at \"info\" severity and higher are written\nto the log and all others are discarded.\n\n[tracing-subscriber]: https://docs.rs/tracing-subscriber/latest/tracing_subscriber/filter/struct.EnvFilter.html#directives",
"default": null,
"nullable": true
},
"max_buffering_delay_usecs": {
"type": "integer",
"format": "int64",
"description": "Maximal delay in microseconds to wait for `min_batch_size_records` to\nget buffered by the controller, defaults to 0.",
"default": 0,
"minimum": 0
},
"max_parallel_connector_init": {
"type": "integer",
"format": "int64",
"description": "The maximum number of connectors initialized in parallel during pipeline\nstartup.\n\nAt startup, the pipeline must initialize all of its input and output connectors.\nDepending on the number and types of connectors, this can take a long time.\nTo accelerate the process, multiple connectors are initialized concurrently.\nThis option controls the maximum number of connectors that can be initialized\nin parallel.\n\nThe default is 10.",
"default": null,
"nullable": true,
"minimum": 0
},
"min_batch_size_records": {
"type": "integer",
"format": "int64",
"description": "Minimal input batch size.\n\nThe controller delays pushing input records to the circuit until at\nleast `min_batch_size_records` records have been received (total\nacross all endpoints) or `max_buffering_delay_usecs` microseconds\nhave passed since at least one input records has been buffered.\nDefaults to 0.",
"default": 0,
"minimum": 0
},
"pin_cpus": {
"type": "array",
"items": {
"type": "integer",
"minimum": 0
},
"description": "Optionally, a list of CPU numbers for CPUs to which the pipeline may pin\nits worker threads. Specify at least twice as many CPU numbers as\nworkers. CPUs are generally numbered starting from 0. The pipeline\nmight not be able to honor CPU pinning requests.\n\nCPU pinning can make pipelines run faster and perform more consistently,\nas long as different pipelines running on the same machine are pinned to\ndifferent CPUs.",
"default": []
},
"pipeline_template_configmap": {
"allOf": [
{
"$ref": "#/components/schemas/PipelineTemplateConfig"
}
],
"default": null,
"nullable": true
},
"provisioning_timeout_secs": {
"type": "integer",
"format": "int64",
"description": "Timeout in seconds for the `Provisioning` phase of the pipeline.\nSetting this value will override the default of the runner.",
"default": null,
"nullable": true,
"minimum": 0
},
"resources": {
"allOf": [
{
"$ref": "#/components/schemas/ResourceConfig"
}
],
"default": {
"cpu_cores_min": null,
"cpu_cores_max": null,
"memory_mb_min": null,
"memory_mb_max": null,
"storage_mb_max": null,
"storage_class": null,
"service_account_name": null,
"namespace": null
}
},
"storage": {
"allOf": [
{
"$ref": "#/components/schemas/StorageOptions"
}
],
"default": {
"backend": {
"name": "default"
},
"min_storage_bytes": null,
"min_step_storage_bytes": null,
"compression": "default",
"cache_mib": null
},
"nullable": true
},
"tracing": {
"type": "boolean",
"description": "Enable pipeline tracing.",
"default": false
},
"tracing_endpoint_jaeger": {
"type": "string",
"description": "Jaeger tracing endpoint to send tracing information to.",
"default": "127.0.0.1:6831"
},
"workers": {
"type": "integer",
"format": "int32",
"description": "Number of DBSP worker threads.\n\nEach DBSP \"foreground\" worker thread is paired with a \"background\"\nthread for LSM merging, making the total number of threads twice the\nspecified number.\n\nThe typical sweet spot for the number of workers is between 4 and 16.\nEach worker increases overall memory consumption for data structures\nused during a step.",
"default": 8,
"minimum": 0
}
}
},
"RuntimeDesiredStatus": {
"type": "string",
"enum": [
"Unavailable",
"Coordination",
"Standby",
"Paused",
"Running",
"Suspended"
]
},
"RuntimeStatus": {
"type": "string",
"description": "Runtime status of the pipeline.\n\nOf the statuses, only `Unavailable` is determined by the runner. All other statuses are\ndetermined by the pipeline and taken over by the runner.",
"enum": [
"Unavailable",
"Coordination",
"Standby",
"Initializing",
"AwaitingApproval",
"Bootstrapping",
"Replaying",
"Paused",
"Running",
"Suspended"
]
},
"RustCompilationInfo": {
"type": "object",
"description": "Rust compilation information.",
"required": [
"exit_code",
"stdout",
"stderr"
],
"properties": {
"exit_code": {
"type": "integer",
"format": "int32",
"description": "Exit code of the `cargo` compilation command."
},
"stderr": {
"type": "string",
"description": "Output printed to stderr by the `cargo` compilation command."
},
"stdout": {
"type": "string",
"description": "Output printed to stdout by the `cargo` compilation command."
}
}
},
"S3InputConfig": {
"type": "object",
"description": "Configuration for reading data from AWS S3.",
"required": [
"region",
"bucket_name"
],
"properties": {
"aws_access_key_id": {
"type": "string",
"description": "AWS Access Key id. This property must be specified unless `no_sign_request` is set to `true`.",
"nullable": true
},
"aws_secret_access_key": {
"type": "string",
"description": "Secret Access Key. This property must be specified unless `no_sign_request` is set to `true`.",
"nullable": true
},
"bucket_name": {
"type": "string",
"description": "S3 bucket name to access."
},
"endpoint_url": {
"type": "string",
"description": "The endpoint URL used to communicate with this service. Can be used to make this connector\ntalk to non-AWS services with an S3 API.",
"nullable": true
},
"key": {
"type": "string",
"description": "Read a single object specified by a key.",
"nullable": true
},
"max_concurrent_fetches": {
"type": "integer",
"format": "int32",
"description": "Controls the number of S3 objects fetched in parallel.\n\nIncreasing this value can improve throughput by enabling greater concurrency.\nHowever, higher concurrency may lead to timeouts or increased memory usage due to in-memory buffering.\n\nRecommended range: 1–10. Default: 8.",
"minimum": 0
},
"max_retries": {
"type": "integer",
"format": "int32",
"description": "Retry `max_retries` times with exponential backoff.\nIf the object changes during a retry attempt, the remaining part of the object will not be processed.\nRecommended range: 3-10. Default: 5.",
"minimum": 0
},
"no_sign_request": {
"type": "boolean",
"description": "Do not sign requests. This is equivalent to the `--no-sign-request` flag in the AWS CLI."
},
"prefix": {
"type": "string",
"description": "Read all objects whose keys match a prefix. Set to an empty string to read all objects in the bucket.",
"nullable": true
},
"region": {
"type": "string",
"description": "AWS region."
}
}
},
"SampleStatistics": {
"type": "object",
"description": "One sample of time-series data.",
"required": [
"t",
"r",
"m",
"s"
],
"properties": {
"m": {
"type": "integer",
"format": "int64",
"description": "Memory usage in bytes.",
"minimum": 0
},
"r": {
"type": "integer",
"format": "int64",
"description": "Records processed.",
"minimum": 0
},
"s": {
"type": "integer",
"format": "int64",
"description": "Storage usage in bytes.",
"minimum": 0
},
"t": {
"type": "string",
"format": "date-time",
"description": "Sample time."
}
}
},
"ServiceStatus": {
"type": "object",
"required": [
"healthy",
"message",
"unchanged_since",
"checked_at"
],
"properties": {
"checked_at": {
"type": "string",
"format": "date-time"
},
"healthy": {
"type": "boolean"
},
"message": {
"type": "string"
},
"unchanged_since": {
"type": "string",
"format": "date-time"
}
}
},
"SessionInfo": {
"type": "object",
"required": [
"tenant_id",
"tenant_name"
],
"properties": {
"tenant_id": {
"$ref": "#/components/schemas/TenantId"
},
"tenant_name": {
"type": "string",
"description": "Current user's tenant name"
}
}
},
"ShortEndpointConfig": {
"type": "object",
"description": "Schema definition for endpoint config that only includes the stream field.",
"required": [
"stream"
],
"properties": {
"stream": {
"type": "string",
"description": "The name of the stream."
}
}
},
"SourcePosition": {
"type": "object",
"required": [
"start_line_number",
"start_column",
"end_line_number",
"end_column"
],
"properties": {
"end_column": {
"type": "integer",
"minimum": 0
},
"end_line_number": {
"type": "integer",
"minimum": 0
},
"start_column": {
"type": "integer",
"minimum": 0
},
"start_line_number": {
"type": "integer",
"minimum": 0
}
}
},
"SqlCompilationInfo": {
"type": "object",
"description": "SQL compilation information.",
"required": [
"exit_code",
"messages"
],
"properties": {
"exit_code": {
"type": "integer",
"format": "int32",
"description": "Exit code of the SQL compiler."
},
"messages": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SqlCompilerMessage"
},
"description": "Messages (warnings and errors) generated by the SQL compiler."
}
}
},
"SqlCompilerMessage": {
"type": "object",
"description": "A SQL compiler error.\n\nThe SQL compiler returns a list of errors in the following JSON format if\nit's invoked with the `-je` option.\n\n```ignore\n[ {\n\"start_line_number\" : 2,\n\"start_column\" : 4,\n\"end_line_number\" : 2,\n\"end_column\" : 8,\n\"warning\" : false,\n\"error_type\" : \"PRIMARY KEY cannot be nullable\",\n\"message\" : \"PRIMARY KEY column 'C' has type INTEGER, which is nullable\",\n\"snippet\" : \" 2| c INT PRIMARY KEY\\n ^^^^^\\n 3|);\\n\"\n} ]\n```",
"required": [
"start_line_number",
"start_column",
"end_line_number",
"end_column",
"warning",
"error_type",
"message"
],
"properties": {
"end_column": {
"type": "integer",
"minimum": 0
},
"end_line_number": {
"type": "integer",
"minimum": 0
},
"error_type": {
"type": "string"
},
"message": {
"type": "string"
},
"snippet": {
"type": "string",
"nullable": true
},
"start_column": {
"type": "integer",
"minimum": 0
},
"start_line_number": {
"type": "integer",
"minimum": 0
},
"warning": {
"type": "boolean"
}
}
},
"SqlIdentifier": {
"type": "object",
"description": "An SQL identifier.\n\nThis struct is used to represent SQL identifiers in a canonical form.\nWe store table names or field names as identifiers in the schema.",
"required": [
"name",
"case_sensitive"
],
"properties": {
"case_sensitive": {
"type": "boolean"
},
"name": {
"type": "string"
}
}
},
"SqlType": {
"oneOf": [
{
"type": "string",
"description": "SQL `BOOLEAN` type.",
"enum": [
"Boolean"
]
},
{
"type": "string",
"description": "SQL `TINYINT` type.",
"enum": [
"TinyInt"
]
},
{
"type": "string",
"description": "SQL `SMALLINT` or `INT2` type.",
"enum": [
"SmallInt"
]
},
{
"type": "string",
"description": "SQL `INTEGER`, `INT`, `SIGNED`, `INT4` type.",
"enum": [
"Int"
]
},
{
"type": "string",
"description": "SQL `BIGINT` or `INT64` type.",
"enum": [
"BigInt"
]
},
{
"type": "string",
"description": "SQL `TINYINT UNSIGNED` type.",
"enum": [
"UTinyInt"
]
},
{
"type": "string",
"description": "SQL `SMALLINT UNSIGNED` type.",
"enum": [
"USmallInt"
]
},
{
"type": "string",
"description": "SQL `UNSIGNED`, `INTEGER UNSIGNED`, `INT UNSIGNED` type.",
"enum": [
"UInt"
]
},
{
"type": "string",
"description": "SQL `BIGINT UNSIGNED` type.",
"enum": [
"UBigInt"
]
},
{
"type": "string",
"description": "SQL `REAL` or `FLOAT4` or `FLOAT32` type.",
"enum": [
"Real"
]
},
{
"type": "string",
"description": "SQL `DOUBLE` or `FLOAT8` or `FLOAT64` type.",
"enum": [
"Double"
]
},
{
"type": "string",
"description": "SQL `DECIMAL` or `DEC` or `NUMERIC` type.",
"enum": [
"Decimal"
]
},
{
"type": "string",
"description": "SQL `CHAR(n)` or `CHARACTER(n)` type.",
"enum": [
"Char"
]
},
{
"type": "string",
"description": "SQL `VARCHAR`, `CHARACTER VARYING`, `TEXT`, or `STRING` type.",
"enum": [
"Varchar"
]
},
{
"type": "string",
"description": "SQL `BINARY(n)` type.",
"enum": [
"Binary"
]
},
{
"type": "string",
"description": "SQL `VARBINARY` or `BYTEA` type.",
"enum": [
"Varbinary"
]
},
{
"type": "string",
"description": "SQL `TIME` type.",
"enum": [
"Time"
]
},
{
"type": "string",
"description": "SQL `DATE` type.",
"enum": [
"Date"
]
},
{
"type": "string",
"description": "SQL `TIMESTAMP` type.",
"enum": [
"Timestamp"
]
},
{
"type": "object",
"required": [
"Interval"
],
"properties": {
"Interval": {
"$ref": "#/components/schemas/IntervalUnit"
}
}
},
{
"type": "string",
"description": "SQL `ARRAY` type.",
"enum": [
"Array"
]
},
{
"type": "string",
"description": "A complex SQL struct type (`CREATE TYPE x ...`).",
"enum": [
"Struct"
]
},
{
"type": "string",
"description": "SQL `MAP` type.",
"enum": [
"Map"
]
},
{
"type": "string",
"description": "SQL `NULL` type.",
"enum": [
"Null"
]
},
{
"type": "string",
"description": "SQL `UUID` type.",
"enum": [
"Uuid"
]
},
{
"type": "string",
"description": "SQL `VARIANT` type.",
"enum": [
"Variant"
]
}
],
"description": "The available SQL types as specified in `CREATE` statements."
},
"StartFromCheckpoint": {
"oneOf": [
{
"type": "string",
"enum": [
"latest"
]
},
{
"type": "string",
"format": "uuid"
}
],
"nullable": true
},
"StartTransactionResponse": {
"type": "object",
"description": "Response to a `/start_transaction` request.",
"required": [
"transaction_id"
],
"properties": {
"transaction_id": {
"type": "integer",
"format": "int64"
}
}
},
"StorageBackendConfig": {
"oneOf": [
{
"type": "object",
"required": [
"name"
],
"properties": {
"name": {
"type": "string",
"enum": [
"default"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/FileBackendConfig"
},
"name": {
"type": "string",
"enum": [
"file"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/ObjectStorageConfig"
},
"name": {
"type": "string",
"enum": [
"object"
]
}
}
}
],
"description": "Backend storage configuration.",
"discriminator": {
"propertyName": "name"
}
},
"StorageCacheConfig": {
"type": "string",
"description": "How to cache access to storage within a Feldera pipeline.",
"enum": [
"page_cache",
"feldera_cache"
]
},
"StorageCompression": {
"type": "string",
"description": "Storage compression algorithm.",
"enum": [
"default",
"none",
"snappy"
]
},
"StorageConfig": {
"type": "object",
"description": "Configuration for persistent storage in a [`PipelineConfig`].",
"required": [
"path"
],
"properties": {
"cache": {
"$ref": "#/components/schemas/StorageCacheConfig"
},
"path": {
"type": "string",
"description": "A directory to keep pipeline state, as a path on the filesystem of the\nmachine or container where the pipeline will run.\n\nWhen storage is enabled, this directory stores the data for\n[StorageBackendConfig::Default].\n\nWhen fault tolerance is enabled, this directory stores checkpoints and\nthe log."
}
}
},
"StorageOptions": {
"type": "object",
"description": "Storage configuration for a pipeline.",
"properties": {
"backend": {
"allOf": [
{
"$ref": "#/components/schemas/StorageBackendConfig"
}
],
"default": {
"name": "default"
}
},
"cache_mib": {
"type": "integer",
"description": "The maximum size of the in-memory storage cache, in MiB.\n\nIf set, the specified cache size is spread across all the foreground and\nbackground threads. If unset, each foreground or background thread cache\nis limited to 256 MiB.",
"default": null,
"nullable": true,
"minimum": 0
},
"compression": {
"allOf": [
{
"$ref": "#/components/schemas/StorageCompression"
}
],
"default": "default"
},
"min_step_storage_bytes": {
"type": "integer",
"description": "For a batch of data passed through the pipeline during a single step,\nthe minimum estimated number of bytes to write it to storage.\n\nThis is provided for debugging and fine-tuning and should ordinarily be\nleft unset. A value of 0 will write even empty batches to storage, and\nnonzero values provide a threshold. `usize::MAX`, the default,\neffectively disables storage for such batches. If it is set to another\nvalue, it should ordinarily be greater than or equal to\n`min_storage_bytes`.",
"default": null,
"nullable": true,
"minimum": 0
},
"min_storage_bytes": {
"type": "integer",
"description": "For a batch of data maintained as part of a persistent index during a\npipeline run, the minimum estimated number of bytes to write it to\nstorage.\n\nThis is provided for debugging and fine-tuning and should ordinarily be\nleft unset.\n\nA value of 0 will write even empty batches to storage, and nonzero\nvalues provide a threshold. `usize::MAX` would effectively disable\nstorage for such batches. The default is 10,048,576 (10 MiB).",
"default": null,
"nullable": true,
"minimum": 0
}
}
},
"StorageStatus": {
"type": "string",
"description": "Storage status.\n\nThe storage status can only transition when the resources status is `Stopped`.\n\n```text\nCleared ───┐\n▲ │\n/clear │ │\n│ │\nClearing │\n▲ │\n│ │\nInUse ◄───┘\n```",
"enum": [
"Cleared",
"InUse",
"Clearing"
]
},
"SuspendError": {
"oneOf": [
{
"type": "object",
"required": [
"Permanent"
],
"properties": {
"Permanent": {
"type": "array",
"items": {
"$ref": "#/components/schemas/PermanentSuspendError"
},
"description": "Pipeline does not support suspend-and-resume.\n\nThese reasons only change if the pipeline's configuration changes, e.g.\nif a pipeline has an input connector that does not support\nsuspend-and-resume, and then that input connector is removed."
}
}
},
{
"type": "object",
"required": [
"Temporary"
],
"properties": {
"Temporary": {
"type": "array",
"items": {
"$ref": "#/components/schemas/TemporarySuspendError"
},
"description": "Pipeline supports suspend-and-resume, but a suspend requested now will\nbe delayed."
}
}
}
],
"description": "Whether a pipeline supports checkpointing and suspend-and-resume."
},
"SyncConfig": {
"type": "object",
"required": [
"bucket"
],
"properties": {
"access_key": {
"type": "string",
"description": "The access key used to authenticate with the storage provider.\n\nIf not provided, rclone will fall back to environment-based credentials, such as\n`RCLONE_S3_ACCESS_KEY_ID`. In Kubernetes environments using IRSA (IAM Roles for Service Accounts),\nthis can be left empty to allow automatic authentication via the pod's service account.",
"nullable": true
},
"bucket": {
"type": "string",
"description": "The name of the storage bucket.\n\nThis may include a path to a folder inside the bucket (e.g., `my-bucket/data`)."
},
"checkers": {
"type": "integer",
"format": "int32",
"description": "The number of checkers to run in parallel.\nDefault: 20",
"nullable": true,
"minimum": 0
},
"endpoint": {
"type": "string",
"description": "The endpoint URL for the storage service.\n\nThis is typically required for custom or local S3-compatible storage providers like MinIO.\nExample: `http://localhost:9000`\n\nRelevant rclone config key: [`endpoint`](https://rclone.org/s3/#s3-endpoint)",
"nullable": true
},
"fail_if_no_checkpoint": {
"type": "boolean",
"description": "When true, the pipeline will fail to initialize if fetching the\nspecified checkpoint fails (missing, download error).\nWhen false, the pipeline will start from scratch instead.\n\nFalse by default.",
"default": false
},
"flags": {
"type": "array",
"items": {
"type": "string"
},
"description": "Extra flags to pass to `rclone`.\n\nWARNING: Supplying incorrect or conflicting flags can break `rclone`.\nUse with caution.\n\nRefer to the docs to see the supported flags:\n- [Global flags](https://rclone.org/flags/)\n- [S3 specific flags](https://rclone.org/s3/)",
"nullable": true
},
"ignore_checksum": {
"type": "boolean",
"description": "Set to skip post copy check of checksums, and only check the file sizes.\nThis can significantly improve the throughput.\nDefualt: false",
"nullable": true
},
"multi_thread_cutoff": {
"type": "string",
"description": "Use multi-thread download for files above this size.\nFormat: `[size][Suffix]` (Example: 1G, 500M)\nSupported suffixes: k|M|G|T\nDefault: 100M",
"nullable": true
},
"multi_thread_streams": {
"type": "integer",
"format": "int32",
"description": "Number of streams to use for multi-thread downloads.\nDefault: 10",
"nullable": true,
"minimum": 0
},
"provider": {
"type": "string",
"description": "The name of the cloud storage provider (e.g., `\"AWS\"`, `\"Minio\"`).\n\nUsed for provider-specific behavior in rclone.\nIf omitted, defaults to `\"Other\"`.\n\nSee [rclone S3 provider documentation](https://rclone.org/s3/#s3-provider)",
"nullable": true
},
"pull_interval": {
"type": "integer",
"format": "int64",
"description": "The interval (in seconds) between each attempt to fetch the latest\ncheckpoint from object store while in standby mode.\n\nApplies only when `start_from_checkpoint` is set to `latest`.\n\nDefault: 10 seconds",
"default": 10,
"minimum": 0
},
"push_interval": {
"type": "integer",
"format": "int64",
"description": "The interval (in seconds) between each push of checkpoints to object store.\n\nDefault: disabled (no periodic push).",
"nullable": true,
"minimum": 0
},
"region": {
"type": "string",
"description": "The region that this bucket is in.\n\nLeave empty for Minio or the default region (`us-east-1` for AWS).",
"nullable": true
},
"retention_min_age": {
"type": "integer",
"format": "int32",
"description": "The minimum age (in days) a checkpoint must reach before it becomes\neligible for deletion. All younger checkpoints will be preserved.\n\nDefault: 30",
"default": 30,
"minimum": 0
},
"retention_min_count": {
"type": "integer",
"format": "int32",
"description": "The minimum number of checkpoints to retain in object store.\nNo checkpoints will be deleted if the total count is below this threshold.\n\nDefault: 10",
"default": 10,
"minimum": 0
},
"secret_key": {
"type": "string",
"description": "The secret key used together with the access key for authentication.\n\nIf not provided, rclone will fall back to environment-based credentials, such as\n`RCLONE_S3_SECRET_ACCESS_KEY`. In Kubernetes environments using IRSA (IAM Roles for Service Accounts),\nthis can be left empty to allow automatic authentication via the pod's service account.",
"nullable": true
},
"standby": {
"type": "boolean",
"description": "When `true`, the pipeline starts in **standby** mode; processing doesn't\nstart until activation (`POST /activate`).\nIf this pipeline was previously activated and the storage has not been\ncleared, the pipeline will auto activate, no newer checkpoints will be\nfetched.\n\nStandby behavior depends on `start_from_checkpoint`:\n- If `latest`, pipeline continuously fetches the latest available\ncheckpoint until activated.\n- If checkpoint UUID, pipeline fetches this checkpoint once and waits\nin standby until activated.\n\nDefault: `false`",
"default": false
},
"start_from_checkpoint": {
"allOf": [
{
"$ref": "#/components/schemas/StartFromCheckpoint"
}
],
"nullable": true
},
"transfers": {
"type": "integer",
"format": "int32",
"description": "The number of file transfers to run in parallel.\nDefault: 20",
"nullable": true,
"minimum": 0
},
"upload_concurrency": {
"type": "integer",
"format": "int32",
"description": "The number of chunks of the same file that are uploaded for multipart uploads.\nDefault: 10",
"nullable": true,
"minimum": 0
}
}
},
"TemporarySuspendError": {
"oneOf": [
{
"type": "string",
"enum": [
"Replaying"
]
},
{
"type": "string",
"enum": [
"Bootstrapping"
]
},
{
"type": "string",
"enum": [
"TransactionInProgress"
]
},
{
"type": "object",
"required": [
"InputEndpointBarrier"
],
"properties": {
"InputEndpointBarrier": {
"type": "string"
}
}
},
{
"type": "string",
"enum": [
"Coordination"
]
}
],
"description": "Reasons why a pipeline cannot be suspended at this time."
},
"TenantId": {
"type": "string",
"format": "uuid"
},
"TimeSeries": {
"type": "object",
"description": "Time series to make graphs in the web console easier.",
"required": [
"now",
"samples"
],
"properties": {
"now": {
"type": "string",
"format": "date-time",
"description": "Current time as of the creation of the structure."
},
"samples": {
"type": "array",
"items": {
"$ref": "#/components/schemas/SampleStatistics"
},
"description": "Time series.\n\nThese report 60 seconds of samples, one per second."
}
}
},
"TransactionInitiators": {
"type": "object",
"description": "Information about entities that initiated the current transaction.",
"required": [
"initiated_by_connectors"
],
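"example": {
"transaction_id": 1,
"initiated_by_api": "Started",
"initiated_by_connectors": {}
},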
"properties": {
"initiated_by_api": {
"allOf": [
{
"$ref": "#/components/schemas/TransactionPhase"
}
],
"nullable": true
},
"initiated_by_connectors": {
"type": "object",
"description": "Transaction phases initiated by connectors, indexed by endpoint name.",
"additionalProperties": {
"$ref": "#/components/schemas/ConnectorTransactionPhase"
}
},
"transaction_id": {
"type": "integer",
"format": "int64",
"description": "ID assigned to the transaction (None if no transaction is in progress).",
"nullable": true
}
}
},
"TransactionPhase": {
"type": "string",
"description": "Transaction phase.",
"enum": [
"Started",
"Committed"
]
},
"TransactionStatus": {
"type": "string",
"description": "Transaction status summarized as a single value.",
"enum": [
"NoTransaction",
"TransactionInProgress",
"CommitInProgress"
]
},
"TransportConfig": {
"oneOf": [
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/FileInputConfig"
},
"name": {
"type": "string",
"enum": [
"file_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/FileOutputConfig"
},
"name": {
"type": "string",
"enum": [
"file_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/NatsInputConfig"
},
"name": {
"type": "string",
"enum": [
"nats_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/KafkaInputConfig"
},
"name": {
"type": "string",
"enum": [
"kafka_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/KafkaOutputConfig"
},
"name": {
"type": "string",
"enum": [
"kafka_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/PubSubInputConfig"
},
"name": {
"type": "string",
"enum": [
"pub_sub_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/UrlInputConfig"
},
"name": {
"type": "string",
"enum": [
"url_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/S3InputConfig"
},
"name": {
"type": "string",
"enum": [
"s3_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/DeltaTableReaderConfig"
},
"name": {
"type": "string",
"enum": [
"delta_table_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/DeltaTableWriterConfig"
},
"name": {
"type": "string",
"enum": [
"delta_table_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/RedisOutputConfig"
},
"name": {
"type": "string",
"enum": [
"redis_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/IcebergReaderConfig"
},
"name": {
"type": "string",
"enum": [
"iceberg_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/PostgresReaderConfig"
},
"name": {
"type": "string",
"enum": [
"postgres_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/PostgresWriterConfig"
},
"name": {
"type": "string",
"enum": [
"postgres_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/DatagenInputConfig"
},
"name": {
"type": "string",
"enum": [
"datagen"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/NexmarkInputConfig"
},
"name": {
"type": "string",
"enum": [
"nexmark"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/HttpInputConfig"
},
"name": {
"type": "string",
"enum": [
"http_input"
]
}
}
},
{
"type": "object",
"required": [
"name"
],
"properties": {
"name": {
"type": "string",
"enum": [
"http_output"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/AdHocInputConfig"
},
"name": {
"type": "string",
"enum": [
"ad_hoc_input"
]
}
}
},
{
"type": "object",
"required": [
"name",
"config"
],
"properties": {
"config": {
"$ref": "#/components/schemas/ClockConfig"
},
"name": {
"type": "string",
"enum": [
"clock_input"
]
}
}
}
],
"description": "Transport-specific endpoint configuration passed to\n`crate::OutputTransport::new_endpoint`\nand `crate::InputTransport::new_endpoint`.",
"discriminator": {
"propertyName": "name"
}
},
"UpdateInformation": {
"type": "object",
"required": [
"latest_version",
"is_latest_version",
"instructions_url",
"remind_schedule"
],
"properties": {
"instructions_url": {
"type": "string",
"description": "URL that navigates the user to instructions on how to update their deployment's version"
},
"is_latest_version": {
"type": "boolean",
"description": "Whether the current version matches the latest version"
},
"latest_version": {
"type": "string",
"description": "Latest version corresponding to the edition"
},
"remind_schedule": {
"$ref": "#/components/schemas/DisplaySchedule"
}
}
},
"UrlInputConfig": {
"type": "object",
"description": "Configuration for reading data from an HTTP or HTTPS URL with\n`UrlInputTransport`.",
"required": [
"path"
],
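"example": {
"path": "https://example.com/data.csv"
},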
"properties": {
"path": {
"type": "string",
"description": "URL."
},
"pause_timeout": {
"type": "integer",
"format": "int32",
"description": "Timeout before disconnection when paused, in seconds.\n\nIf the pipeline is paused, or if the input adapter reads data faster\nthan the pipeline can process it, then the controller will pause the\ninput adapter. If the input adapter stays paused longer than this\ntimeout, it will drop the network connection to the server. It will\nautomatically reconnect when the input adapter starts running again.",
"minimum": 0
}
}
},
"UserAndPassword": {
"type": "object",
"required": [
"user",
"password"
],
"properties": {
"password": {
"type": "string"
},
"user": {
"type": "string"
}
}
},
"Version": {
"type": "integer",
"format": "int64",
"description": "Version number."
}
},
"securitySchemes": {
"JSON web token (JWT) or API key": {
"type": "http",
"scheme": "bearer",
"bearerFormat": "JWT",
"description": "Use a JWT token obtained via an OAuth2/OIDC\n login workflow or an API key obtained via\n the `/v0/api-keys` endpoint."
}
}
},
"tags": [
{
"name": "Pipeline management",
"description": "Create, retrieve, update, delete and deploy pipelines."
},
{
"name": "Pipeline interaction",
"description": "Interact with deployed pipelines."
},
{
"name": "Configuration",
"description": "Retrieve configuration."
},
{
"name": "API keys",
"description": "Create, retrieve and delete API keys."
},
{
"name": "Metrics",
"description": "Retrieve metrics across pipelines."
}
]
}